00:00:00.001 Started by upstream project "autotest-per-patch" build number 132708
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.174 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.175 The recommended git tool is: git
00:00:00.175 using credential 00000000-0000-0000-0000-000000000002
00:00:00.177 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.201 Fetching changes from the remote Git repository
00:00:00.205 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.228 Using shallow fetch with depth 1
00:00:00.228 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.228 > git --version # timeout=10
00:00:00.244 > git --version # 'git version 2.39.2'
00:00:00.244 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.256 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.256 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.933 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.947 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.957 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.957 > git config core.sparsecheckout # timeout=10
00:00:05.968 > git read-tree -mu HEAD # timeout=10
00:00:05.985 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.007 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.008 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.121 [Pipeline] Start of Pipeline
00:00:06.160 [Pipeline] library
00:00:06.162 Loading library shm_lib@master
00:00:06.162 Library shm_lib@master is cached. Copying from home.
00:00:06.179 [Pipeline] node
00:00:06.189 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.191 [Pipeline] {
00:00:06.200 [Pipeline] catchError
00:00:06.201 [Pipeline] {
00:00:06.210 [Pipeline] wrap
00:00:06.216 [Pipeline] {
00:00:06.223 [Pipeline] stage
00:00:06.224 [Pipeline] { (Prologue)
00:00:06.536 [Pipeline] sh
00:00:06.821 + logger -p user.info -t JENKINS-CI
00:00:06.839 [Pipeline] echo
00:00:06.841 Node: WFP8
00:00:06.847 [Pipeline] sh
00:00:07.144 [Pipeline] setCustomBuildProperty
00:00:07.152 [Pipeline] echo
00:00:07.153 Cleanup processes
00:00:07.157 [Pipeline] sh
00:00:07.437 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.437 2339664 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.448 [Pipeline] sh
00:00:07.729 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.729 ++ grep -v 'sudo pgrep'
00:00:07.729 ++ awk '{print $1}'
00:00:07.729 + sudo kill -9
00:00:07.729 + true
00:00:07.741 [Pipeline] cleanWs
00:00:07.749 [WS-CLEANUP] Deleting project workspace...
00:00:07.749 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.755 [WS-CLEANUP] done
00:00:07.758 [Pipeline] setCustomBuildProperty
00:00:07.770 [Pipeline] sh
00:00:08.047 + sudo git config --global --replace-all safe.directory '*'
00:00:08.126 [Pipeline] httpRequest
00:00:08.897 [Pipeline] echo
00:00:08.900 Sorcerer 10.211.164.20 is alive
00:00:08.909 [Pipeline] retry
00:00:08.911 [Pipeline] {
00:00:08.923 [Pipeline] httpRequest
00:00:08.927 HttpMethod: GET
00:00:08.927 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.928 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.930 Response Code: HTTP/1.1 200 OK
00:00:08.930 Success: Status code 200 is in the accepted range: 200,404
00:00:08.931 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.037 [Pipeline] }
00:00:10.055 [Pipeline] // retry
00:00:10.062 [Pipeline] sh
00:00:10.348 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.364 [Pipeline] httpRequest
00:00:10.714 [Pipeline] echo
00:00:10.716 Sorcerer 10.211.164.20 is alive
00:00:10.726 [Pipeline] retry
00:00:10.728 [Pipeline] {
00:00:10.742 [Pipeline] httpRequest
00:00:10.747 HttpMethod: GET
00:00:10.747 URL: http://10.211.164.20/packages/spdk_05632f11a7d36f5bcaedebbee01d09177c85f1b6.tar.gz
00:00:10.748 Sending request to url: http://10.211.164.20/packages/spdk_05632f11a7d36f5bcaedebbee01d09177c85f1b6.tar.gz
00:00:10.762 Response Code: HTTP/1.1 200 OK
00:00:10.762 Success: Status code 200 is in the accepted range: 200,404
00:00:10.763 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_05632f11a7d36f5bcaedebbee01d09177c85f1b6.tar.gz
00:01:06.152 [Pipeline] }
00:01:06.170 [Pipeline] // retry
00:01:06.178 [Pipeline] sh
00:01:06.465 + tar --no-same-owner -xf spdk_05632f11a7d36f5bcaedebbee01d09177c85f1b6.tar.gz
00:01:09.019 [Pipeline] sh
00:01:09.310 + git -C spdk log --oneline -n5
00:01:09.310 05632f11a lib/reduce: Add a chunk data read/write cache
00:01:09.310 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:01:09.310 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:01:09.310 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:01:09.310 e2dfdf06c accel/mlx5: Register post_poller handler
00:01:09.321 [Pipeline] }
00:01:09.331 [Pipeline] // stage
00:01:09.339 [Pipeline] stage
00:01:09.341 [Pipeline] { (Prepare)
00:01:09.357 [Pipeline] writeFile
00:01:09.372 [Pipeline] sh
00:01:09.656 + logger -p user.info -t JENKINS-CI
00:01:09.668 [Pipeline] sh
00:01:09.952 + logger -p user.info -t JENKINS-CI
00:01:09.964 [Pipeline] sh
00:01:10.245 + cat autorun-spdk.conf
00:01:10.245 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.245 SPDK_TEST_NVMF=1
00:01:10.245 SPDK_TEST_NVME_CLI=1
00:01:10.245 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:10.245 SPDK_TEST_NVMF_NICS=e810
00:01:10.245 SPDK_TEST_VFIOUSER=1
00:01:10.245 SPDK_RUN_UBSAN=1
00:01:10.245 NET_TYPE=phy
00:01:10.253 RUN_NIGHTLY=0
00:01:10.259 [Pipeline] readFile
00:01:10.312 [Pipeline] withEnv
00:01:10.314 [Pipeline] {
00:01:10.325 [Pipeline] sh
00:01:10.609 + set -ex
00:01:10.609 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:10.609 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:10.609 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.609 ++ SPDK_TEST_NVMF=1
00:01:10.609 ++ SPDK_TEST_NVME_CLI=1
00:01:10.609 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:10.609 ++ SPDK_TEST_NVMF_NICS=e810
00:01:10.609 ++ SPDK_TEST_VFIOUSER=1
00:01:10.609 ++ SPDK_RUN_UBSAN=1
00:01:10.609 ++ NET_TYPE=phy
00:01:10.609 ++ RUN_NIGHTLY=0
00:01:10.609 + case $SPDK_TEST_NVMF_NICS in
00:01:10.609 + DRIVERS=ice
00:01:10.609 + [[ tcp == \r\d\m\a ]]
00:01:10.609 + [[ -n ice ]]
00:01:10.609 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:10.609 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:10.609 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:10.609 rmmod: ERROR: Module irdma is not currently loaded
00:01:10.609 rmmod: ERROR: Module i40iw is not currently loaded
00:01:10.609 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:10.609 + true
00:01:10.609 + for D in $DRIVERS
00:01:10.609 + sudo modprobe ice
00:01:10.609 + exit 0
00:01:10.617 [Pipeline] }
00:01:10.628 [Pipeline] // withEnv
00:01:10.632 [Pipeline] }
00:01:10.643 [Pipeline] // stage
00:01:10.652 [Pipeline] catchError
00:01:10.653 [Pipeline] {
00:01:10.666 [Pipeline] timeout
00:01:10.666 Timeout set to expire in 1 hr 0 min
00:01:10.667 [Pipeline] {
00:01:10.681 [Pipeline] stage
00:01:10.682 [Pipeline] { (Tests)
00:01:10.696 [Pipeline] sh
00:01:10.980 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.980 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.980 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.980 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:10.980 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:10.980 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:10.980 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:10.980 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:10.980 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:10.980 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:10.980 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:10.980 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.980 + source /etc/os-release
00:01:10.980 ++ NAME='Fedora Linux'
00:01:10.980 ++ VERSION='39 (Cloud Edition)'
00:01:10.980 ++ ID=fedora
00:01:10.980 ++ VERSION_ID=39
00:01:10.980 ++ VERSION_CODENAME=
00:01:10.980 ++ PLATFORM_ID=platform:f39
00:01:10.980 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:10.980 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:10.980 ++ LOGO=fedora-logo-icon
00:01:10.980 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:10.980 ++ HOME_URL=https://fedoraproject.org/
00:01:10.980 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:10.980 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:10.980 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:10.980 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:10.980 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:10.980 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:10.980 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:10.980 ++ SUPPORT_END=2024-11-12
00:01:10.980 ++ VARIANT='Cloud Edition'
00:01:10.980 ++ VARIANT_ID=cloud
00:01:10.980 + uname -a
00:01:10.980 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:10.980 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:12.882 Hugepages
00:01:12.882 node hugesize free / total
00:01:12.882 node0 1048576kB 0 / 0
00:01:12.882 node0 2048kB 0 / 0
00:01:12.882 node1 1048576kB 0 / 0
00:01:12.882 node1 2048kB 0 / 0
00:01:12.882
00:01:12.882 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:12.882 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:12.882 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:12.882 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:12.882 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:12.882 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:12.882 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:12.882 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:12.882 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:12.882 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:12.882 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:13.142 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:13.142 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:13.142 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:13.142 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:13.142 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:13.142 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:13.142 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:13.142 + rm -f /tmp/spdk-ld-path
00:01:13.142 + source autorun-spdk.conf
00:01:13.142 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.142 ++ SPDK_TEST_NVMF=1
00:01:13.142 ++ SPDK_TEST_NVME_CLI=1
00:01:13.142 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.142 ++ SPDK_TEST_NVMF_NICS=e810
00:01:13.142 ++ SPDK_TEST_VFIOUSER=1
00:01:13.142 ++ SPDK_RUN_UBSAN=1
00:01:13.142 ++ NET_TYPE=phy
00:01:13.142 ++ RUN_NIGHTLY=0
00:01:13.142 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:13.142 + [[ -n '' ]]
00:01:13.142 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:13.142 + for M in /var/spdk/build-*-manifest.txt
00:01:13.142 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:13.142 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:13.142 + for M in /var/spdk/build-*-manifest.txt
00:01:13.142 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:13.142 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:13.142 + for M in /var/spdk/build-*-manifest.txt
00:01:13.142 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:13.142 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:13.142 ++ uname
00:01:13.142 + [[ Linux == \L\i\n\u\x ]]
00:01:13.142 + sudo dmesg -T
00:01:13.142 + sudo dmesg --clear
00:01:13.142 + dmesg_pid=2341105
00:01:13.142 + [[ Fedora Linux == FreeBSD ]]
00:01:13.142 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:13.142 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:13.142 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:13.142 + [[ -x /usr/src/fio-static/fio ]]
00:01:13.142 + export FIO_BIN=/usr/src/fio-static/fio
00:01:13.142 + FIO_BIN=/usr/src/fio-static/fio
00:01:13.142 + sudo dmesg -Tw
00:01:13.142 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:13.142 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:13.142 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:13.142 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:13.143 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:13.143 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:13.143 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:13.143 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:13.143 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.143 03:09:33 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:13.143 03:09:33 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.143 03:09:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.143 03:09:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:13.143 03:09:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:13.143 03:09:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.143 03:09:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:13.143 03:09:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:13.143 03:09:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:13.143 03:09:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:13.143 03:09:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:13.143 03:09:33 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:13.143 03:09:33 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.402 03:09:33 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:13.402 03:09:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:13.402 03:09:33 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:13.402 03:09:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:13.402 03:09:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:13.402 03:09:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:13.402 03:09:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.402 03:09:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.402 03:09:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.402 03:09:33 -- paths/export.sh@5 -- $ export PATH
00:01:13.402 03:09:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.403 03:09:33 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:13.403 03:09:33 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:13.403 03:09:33 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733450973.XXXXXX
00:01:13.403 03:09:33 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733450973.P5WczZ
00:01:13.403 03:09:33 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:13.403 03:09:33 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:13.403 03:09:33 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:13.403 03:09:33 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:13.403 03:09:33 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:13.403 03:09:33 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:13.403 03:09:33 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:13.403 03:09:33 -- common/autotest_common.sh@10 -- $ set +x
00:01:13.403 03:09:33 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:13.403 03:09:33 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:13.403 03:09:33 -- pm/common@17 -- $ local monitor
00:01:13.403 03:09:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.403 03:09:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.403 03:09:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.403 03:09:33 -- pm/common@21 -- $ date +%s
00:01:13.403 03:09:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.403 03:09:33 -- pm/common@21 -- $ date +%s
00:01:13.403 03:09:33 -- pm/common@21 -- $ date +%s
00:01:13.403 03:09:33 -- pm/common@25 -- $ sleep 1
00:01:13.403 03:09:33 -- pm/common@21 -- $ date +%s
00:01:13.403 03:09:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733450973
00:01:13.403 03:09:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733450973
00:01:13.403 03:09:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733450973
00:01:13.403 03:09:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733450973
00:01:13.403 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733450973_collect-vmstat.pm.log
00:01:13.403 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733450973_collect-cpu-temp.pm.log
00:01:13.403 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733450973_collect-cpu-load.pm.log
00:01:13.403 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733450973_collect-bmc-pm.bmc.pm.log
00:01:14.340 03:09:34 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:14.340 03:09:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:14.340 03:09:34 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:14.340 03:09:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:14.340 03:09:34 -- spdk/autobuild.sh@16 -- $ date -u
00:01:14.340 Fri Dec 6 02:09:34 AM UTC 2024
00:01:14.340 03:09:34 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:14.340 v25.01-pre-304-g05632f11a
00:01:14.340 03:09:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:14.340 03:09:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:14.340 03:09:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:14.340 03:09:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:14.340 03:09:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:14.340 03:09:34 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.340 ************************************
00:01:14.340 START TEST ubsan
00:01:14.340 ************************************
00:01:14.340 03:09:34 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:14.340 using ubsan
00:01:14.340
00:01:14.340 real 0m0.000s
00:01:14.340 user 0m0.000s
00:01:14.340 sys 0m0.000s
00:01:14.340 03:09:34 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:14.340 03:09:34 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.340 ************************************
00:01:14.340 END TEST ubsan
00:01:14.340 ************************************
00:01:14.340 03:09:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:14.340 03:09:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:14.340 03:09:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:14.340 03:09:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:14.340 03:09:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:14.340 03:09:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:14.340 03:09:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:14.340 03:09:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:14.340 03:09:34 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:14.600 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:14.600 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:14.861 Using 'verbs' RDMA provider
00:01:28.018 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:40.244 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:40.244 Creating mk/config.mk...done.
00:01:40.244 Creating mk/cc.flags.mk...done.
00:01:40.244 Type 'make' to build.
00:01:40.244 03:09:58 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:40.244 03:09:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:40.244 03:09:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:40.244 03:09:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:40.244 ************************************
00:01:40.244 START TEST make
00:01:40.244 ************************************
00:01:40.244 03:09:58 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:40.244 make[1]: Nothing to be done for 'all'.
00:01:40.508 The Meson build system
00:01:40.508 Version: 1.5.0
00:01:40.508 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:40.508 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:40.508 Build type: native build
00:01:40.508 Project name: libvfio-user
00:01:40.508 Project version: 0.0.1
00:01:40.508 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:40.508 C linker for the host machine: cc ld.bfd 2.40-14
00:01:40.508 Host machine cpu family: x86_64
00:01:40.508 Host machine cpu: x86_64
00:01:40.508 Run-time dependency threads found: YES
00:01:40.508 Library dl found: YES
00:01:40.508 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:40.508 Run-time dependency json-c found: YES 0.17
00:01:40.508 Run-time dependency cmocka found: YES 1.1.7
00:01:40.508 Program pytest-3 found: NO
00:01:40.508 Program flake8 found: NO
00:01:40.508 Program misspell-fixer found: NO
00:01:40.508 Program restructuredtext-lint found: NO
00:01:40.508 Program valgrind found: YES (/usr/bin/valgrind)
00:01:40.508 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:40.508 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:40.508 Compiler for C supports arguments -Wwrite-strings: YES
00:01:40.508 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:40.508 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:40.508 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:40.508 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:40.508 Build targets in project: 8
00:01:40.508 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:40.508 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:40.508
00:01:40.508 libvfio-user 0.0.1
00:01:40.508
00:01:40.508 User defined options
00:01:40.508 buildtype : debug
00:01:40.508 default_library: shared
00:01:40.508 libdir : /usr/local/lib
00:01:40.508
00:01:40.509 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:41.075 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:41.333 [1/37] Compiling C object samples/null.p/null.c.o
00:01:41.333 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:41.333 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:41.333 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:41.333 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:41.333 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:41.333 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:41.333 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:41.333 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:41.333 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:41.333 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:41.333 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:41.333 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:41.333 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:41.333 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:41.333 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:41.333 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:41.333 [18/37] Compiling C object samples/server.p/server.c.o
00:01:41.333 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:41.333 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:41.333 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:41.333 [22/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:41.333 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:41.333 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:41.333 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:41.333 [26/37] Compiling C object samples/client.p/client.c.o
00:01:41.333 [27/37] Linking target samples/client
00:01:41.333 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:41.333 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:41.592 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:41.592 [31/37] Linking target test/unit_tests
00:01:41.592 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:41.592 [33/37] Linking target samples/server
00:01:41.592 [34/37] Linking target samples/lspci
00:01:41.592 [35/37] Linking target samples/gpio-pci-idio-16
00:01:41.592 [36/37] Linking target samples/null
00:01:41.592 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:41.592 INFO: autodetecting backend as ninja
00:01:41.592 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:41.851 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:42.111 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:42.111 ninja: no work to do.
00:01:47.383 The Meson build system
00:01:47.383 Version: 1.5.0
00:01:47.384 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:47.384 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:47.384 Build type: native build
00:01:47.384 Program cat found: YES (/usr/bin/cat)
00:01:47.384 Project name: DPDK
00:01:47.384 Project version: 24.03.0
00:01:47.384 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:47.384 C linker for the host machine: cc ld.bfd 2.40-14
00:01:47.384 Host machine cpu family: x86_64
00:01:47.384 Host machine cpu: x86_64
00:01:47.384 Message: ## Building in Developer Mode ##
00:01:47.384 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:47.384 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:47.384 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:47.384 Program python3 found: YES (/usr/bin/python3)
00:01:47.384 Program cat found: YES (/usr/bin/cat)
00:01:47.384 Compiler for C supports arguments -march=native: YES
00:01:47.384 Checking for size of "void *" : 8
00:01:47.384 Checking for size of "void *" : 8 (cached)
00:01:47.384 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:47.384 Library m found: YES
00:01:47.384 Library numa found: YES
00:01:47.384 Has header "numaif.h" : YES
00:01:47.384 Library fdt found: NO
00:01:47.384 Library execinfo found: NO
00:01:47.384 Has header "execinfo.h" : YES
00:01:47.384 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:47.384 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:47.384 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:47.384 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:47.384 Run-time dependency openssl found: YES 3.1.1
00:01:47.384 Run-time dependency libpcap found: YES 1.10.4
00:01:47.384 Has header "pcap.h" with dependency libpcap: YES
00:01:47.384 Compiler for C supports arguments -Wcast-qual: YES
00:01:47.384 Compiler for C supports arguments -Wdeprecated: YES
00:01:47.384 Compiler for C supports arguments -Wformat: YES
00:01:47.384 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:47.384 Compiler for C supports arguments -Wformat-security: NO
00:01:47.384 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:47.384 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:47.384 Compiler for C supports arguments -Wnested-externs: YES
00:01:47.384 Compiler for C supports arguments -Wold-style-definition: YES
00:01:47.384 Compiler for C supports arguments -Wpointer-arith: YES
00:01:47.384 Compiler for C supports arguments -Wsign-compare: YES
00:01:47.384 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:47.384 Compiler for C supports arguments -Wundef: YES
00:01:47.384 Compiler for C supports arguments -Wwrite-strings: YES
00:01:47.384 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:47.384 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:47.384 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:47.384 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:47.384 Program objdump found: YES (/usr/bin/objdump)
00:01:47.384 Compiler for C supports arguments -mavx512f: YES
00:01:47.384 Checking if "AVX512 checking" compiles: YES
00:01:47.384 Fetching value of define "__SSE4_2__" : 1
00:01:47.384 Fetching value of define "__AES__" : 1
00:01:47.384 Fetching value of define "__AVX__" : 1
00:01:47.384 Fetching value of define "__AVX2__" : 1
00:01:47.384 Fetching value of define "__AVX512BW__" : 1
00:01:47.384 Fetching value of define "__AVX512CD__" : 1
00:01:47.384 Fetching value of define "__AVX512DQ__" : 1
00:01:47.384 Fetching value of define "__AVX512F__" : 1
00:01:47.384 Fetching value of define "__AVX512VL__" : 1
00:01:47.384 Fetching value of define "__PCLMUL__" : 1
00:01:47.384 Fetching value of define "__RDRND__" : 1
00:01:47.384 Fetching value of define "__RDSEED__" : 1
00:01:47.384 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:47.384 Fetching value of define "__znver1__" : (undefined)
00:01:47.384 Fetching value of define "__znver2__" : (undefined)
00:01:47.384 Fetching value of define "__znver3__" : (undefined)
00:01:47.384 Fetching value of define "__znver4__" : (undefined)
00:01:47.384 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:47.384 Message: lib/log: Defining dependency "log"
00:01:47.384 Message: lib/kvargs: Defining dependency "kvargs"
00:01:47.384 Message: lib/telemetry: Defining dependency "telemetry"
00:01:47.384 Checking for function "getentropy" : NO
00:01:47.384 Message: lib/eal: Defining dependency "eal"
00:01:47.384 Message: lib/ring: Defining dependency "ring"
00:01:47.384 Message: lib/rcu: Defining dependency "rcu"
00:01:47.384 Message: lib/mempool: Defining dependency "mempool"
00:01:47.384 Message: lib/mbuf: Defining dependency "mbuf"
00:01:47.384 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:47.384 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:47.384 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:47.384 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:47.384 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:47.384 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:47.384 Compiler for C supports arguments -mpclmul: YES
00:01:47.384 Compiler for C supports arguments -maes: YES
00:01:47.384 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:47.384 Compiler for C supports arguments -mavx512bw: YES
00:01:47.384 Compiler for C supports arguments -mavx512dq: YES
00:01:47.384 Compiler for C supports arguments -mavx512vl: YES
00:01:47.384 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:47.384 Compiler for C supports arguments -mavx2: YES
00:01:47.384 Compiler for C supports arguments -mavx: YES
00:01:47.384 Message: lib/net: Defining dependency "net"
00:01:47.384 Message: lib/meter: Defining dependency "meter"
00:01:47.384 Message: lib/ethdev: Defining dependency "ethdev"
00:01:47.384 Message: lib/pci: Defining dependency "pci"
00:01:47.384 Message: lib/cmdline: Defining dependency "cmdline"
00:01:47.384 Message: lib/hash: Defining dependency "hash"
00:01:47.384 Message: lib/timer: Defining dependency "timer"
00:01:47.384 Message: lib/compressdev: Defining dependency "compressdev"
00:01:47.384 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:47.384 Message: lib/dmadev: Defining dependency "dmadev"
00:01:47.384 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:47.384 Message: lib/power: Defining dependency "power"
00:01:47.384 Message: lib/reorder: Defining dependency "reorder"
00:01:47.384 Message: lib/security: Defining dependency "security"
00:01:47.384 Has header "linux/userfaultfd.h" : YES
00:01:47.384 Has header "linux/vduse.h" : YES
00:01:47.384 Message: lib/vhost: Defining dependency "vhost"
00:01:47.384 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:47.384 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:47.384 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:47.384 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:47.384 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:47.384 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:47.384 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:47.384 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:47.384 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:47.384 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:47.384 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:47.384 Configuring doxy-api-html.conf using configuration
00:01:47.384 Configuring doxy-api-man.conf using configuration
00:01:47.384 Program mandb found: YES (/usr/bin/mandb)
00:01:47.384 Program sphinx-build found: NO
00:01:47.384 Configuring rte_build_config.h using configuration
00:01:47.384 Message:
00:01:47.384 =================
00:01:47.384 Applications Enabled
00:01:47.384 =================
00:01:47.384
00:01:47.384 apps:
00:01:47.384
00:01:47.384
00:01:47.384 Message:
00:01:47.384 =================
00:01:47.384 Libraries Enabled
00:01:47.384 =================
00:01:47.384
00:01:47.384 libs:
00:01:47.384 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:47.384 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:47.384 cryptodev, dmadev, power, reorder, security, vhost,
00:01:47.384
00:01:47.384 Message:
00:01:47.384 ===============
00:01:47.384 Drivers Enabled
00:01:47.384 ===============
00:01:47.384
00:01:47.384 common:
00:01:47.384
00:01:47.384 bus:
00:01:47.384 pci, vdev,
00:01:47.384 mempool:
00:01:47.384 ring,
00:01:47.384 dma:
00:01:47.384
00:01:47.384 net:
00:01:47.384
00:01:47.384 crypto:
00:01:47.384
00:01:47.384 compress:
00:01:47.384
00:01:47.384 vdpa:
00:01:47.384
00:01:47.384
00:01:47.384 Message:
00:01:47.384 =================
00:01:47.384 Content Skipped
00:01:47.384 =================
00:01:47.384
00:01:47.384 apps:
00:01:47.384 dumpcap: explicitly disabled via build config
00:01:47.384 graph: explicitly disabled via build config
00:01:47.384 pdump: explicitly disabled via build config
00:01:47.384 proc-info: explicitly disabled via build config
00:01:47.384 test-acl: explicitly disabled via build config
00:01:47.384 test-bbdev: explicitly disabled via build config
00:01:47.384 test-cmdline: explicitly disabled via build config
00:01:47.384 test-compress-perf: explicitly disabled via build config
00:01:47.384 test-crypto-perf: explicitly disabled via build config
00:01:47.384 test-dma-perf: explicitly disabled via build config
00:01:47.384 test-eventdev: explicitly disabled via build config
00:01:47.384 test-fib: explicitly disabled via build config
00:01:47.384 test-flow-perf: explicitly disabled via build config
00:01:47.384 test-gpudev: explicitly disabled via build config
00:01:47.384 test-mldev: explicitly disabled via build config
00:01:47.384 test-pipeline: explicitly disabled via build config
00:01:47.384 test-pmd: explicitly disabled via build config
00:01:47.384 test-regex: explicitly disabled via build config
00:01:47.384 test-sad: explicitly disabled via build config
00:01:47.385 test-security-perf: explicitly disabled via build config
00:01:47.385
00:01:47.385 libs:
00:01:47.385 argparse: explicitly disabled via build config
00:01:47.385 metrics: explicitly disabled via build config
00:01:47.385 acl: explicitly disabled via build config
00:01:47.385 bbdev: explicitly disabled via build config
00:01:47.385 bitratestats: explicitly disabled via build config
00:01:47.385 bpf: explicitly disabled via build config
00:01:47.385 cfgfile: explicitly disabled via build config
00:01:47.385 distributor: explicitly disabled via build config
00:01:47.385 efd: explicitly disabled via build config
00:01:47.385 eventdev: explicitly disabled via build config
00:01:47.385 dispatcher: explicitly disabled via build config
00:01:47.385 gpudev: explicitly disabled via build config
00:01:47.385 gro: explicitly disabled via build config
00:01:47.385 gso: explicitly disabled via build config
00:01:47.385 ip_frag: explicitly disabled via build config
00:01:47.385 jobstats: explicitly disabled via build config
00:01:47.385 latencystats: explicitly disabled via build config
00:01:47.385 lpm: explicitly disabled via build config
00:01:47.385 member: explicitly disabled via build config
00:01:47.385 pcapng: explicitly disabled via build config
00:01:47.385 rawdev: explicitly disabled via build config
00:01:47.385 regexdev: explicitly disabled via build config
00:01:47.385 mldev: explicitly disabled via build config
00:01:47.385 rib: explicitly disabled via build config
00:01:47.385 sched: explicitly disabled via build config
00:01:47.385 stack: explicitly disabled via build config
00:01:47.385 ipsec: explicitly disabled via build config
00:01:47.385 pdcp: explicitly disabled via build config
00:01:47.385 fib: explicitly disabled via build config
00:01:47.385 port: explicitly disabled via build config
00:01:47.385 pdump: explicitly disabled via build config
00:01:47.385 table: explicitly disabled via build config
00:01:47.385 pipeline: explicitly disabled via build config
00:01:47.385 graph: explicitly disabled via build config
00:01:47.385 node: explicitly disabled via build config
00:01:47.385
00:01:47.385 drivers:
00:01:47.385 common/cpt: not in enabled drivers build config
00:01:47.385 common/dpaax: not in enabled drivers build config
00:01:47.385 common/iavf: not in enabled drivers build config
00:01:47.385 common/idpf: not in enabled drivers build config
00:01:47.385 common/ionic: not in enabled drivers build config
00:01:47.385 common/mvep: not in enabled drivers build config
00:01:47.385 common/octeontx: not in enabled drivers build config
00:01:47.385 bus/auxiliary: not in enabled drivers build config
00:01:47.385 bus/cdx: not in enabled drivers build config
00:01:47.385 bus/dpaa: not in enabled drivers build config
00:01:47.385 bus/fslmc: not in enabled drivers build config
00:01:47.385 bus/ifpga: not in enabled drivers build config
00:01:47.385 bus/platform: not in enabled drivers build config
00:01:47.385 bus/uacce: not in enabled drivers build config
00:01:47.385 bus/vmbus: not in enabled drivers build config
00:01:47.385 common/cnxk: not in enabled drivers build config
00:01:47.385 common/mlx5: not in enabled drivers build config
00:01:47.385 common/nfp: not in enabled drivers build config
00:01:47.385 common/nitrox: not in enabled drivers build config
00:01:47.385 common/qat: not in enabled drivers build config
00:01:47.385 common/sfc_efx: not in enabled drivers build config
00:01:47.385 mempool/bucket: not in enabled drivers build config
00:01:47.385 mempool/cnxk: not in enabled drivers build config
00:01:47.385 mempool/dpaa: not in enabled drivers build config
00:01:47.385 mempool/dpaa2: not in enabled drivers build config
00:01:47.385 mempool/octeontx: not in enabled drivers build config
00:01:47.385 mempool/stack: not in enabled drivers build config
00:01:47.385 dma/cnxk: not in enabled drivers build config
00:01:47.385 dma/dpaa: not in enabled drivers build config
00:01:47.385 dma/dpaa2: not in enabled drivers build config
00:01:47.385 dma/hisilicon: not in enabled drivers build config
00:01:47.385 dma/idxd: not in enabled drivers build config
00:01:47.385 dma/ioat: not in enabled drivers build config
00:01:47.385 dma/skeleton: not in enabled drivers build config
00:01:47.385 net/af_packet: not in enabled drivers build config
00:01:47.385 net/af_xdp: not in enabled drivers build config
00:01:47.385 net/ark: not in enabled drivers build config
00:01:47.385 net/atlantic: not in enabled drivers build config
00:01:47.385 net/avp: not in enabled drivers build config
00:01:47.385 net/axgbe: not in enabled drivers build config
00:01:47.385 net/bnx2x: not in enabled drivers build config
00:01:47.385 net/bnxt: not in enabled drivers build config
00:01:47.385 net/bonding: not in enabled drivers build config
00:01:47.385 net/cnxk: not in enabled drivers build config
00:01:47.385 net/cpfl: not in enabled drivers build config
00:01:47.385 net/cxgbe: not in enabled drivers build config
00:01:47.385 net/dpaa: not in enabled drivers build config
00:01:47.385 net/dpaa2: not in enabled drivers build config
00:01:47.385 net/e1000: not in enabled drivers build config
00:01:47.385 net/ena: not in enabled drivers build config
00:01:47.385 net/enetc: not in enabled drivers build config
00:01:47.385 net/enetfec: not in enabled drivers build config
00:01:47.385 net/enic: not in enabled drivers build config
00:01:47.385 net/failsafe: not in enabled drivers build config
00:01:47.385 net/fm10k: not in enabled drivers build config
00:01:47.385 net/gve: not in enabled drivers build config
00:01:47.385 net/hinic: not in enabled drivers build config
00:01:47.385 net/hns3: not in enabled drivers build config
00:01:47.385 net/i40e: not in enabled drivers build config
00:01:47.385 net/iavf: not in enabled drivers build config
00:01:47.385 net/ice: not in enabled drivers build config
00:01:47.385 net/idpf: not in enabled drivers build config
00:01:47.385 net/igc: not in enabled drivers build config
00:01:47.385 net/ionic: not in enabled drivers build config
00:01:47.385 net/ipn3ke: not in enabled drivers build config
00:01:47.385 net/ixgbe: not in enabled drivers build config
00:01:47.385 net/mana: not in enabled drivers build config
00:01:47.385 net/memif: not in enabled drivers build config
00:01:47.385 net/mlx4: not in enabled drivers build config
00:01:47.385 net/mlx5: not in enabled drivers build config
00:01:47.385 net/mvneta: not in enabled drivers build config
00:01:47.385 net/mvpp2: not in enabled drivers build config
00:01:47.385 net/netvsc: not in enabled drivers build config
00:01:47.385 net/nfb: not in enabled drivers build config
00:01:47.385 net/nfp: not in enabled drivers build config
00:01:47.385 net/ngbe: not in enabled drivers build config
00:01:47.385 net/null: not in enabled drivers build config
00:01:47.385 net/octeontx: not in enabled drivers build config
00:01:47.385 net/octeon_ep: not in enabled drivers build config
00:01:47.385 net/pcap: not in enabled drivers build config
00:01:47.385 net/pfe: not in enabled drivers build config
00:01:47.385 net/qede: not in enabled drivers build config
00:01:47.385 net/ring: not in enabled drivers build config
00:01:47.385 net/sfc: not in enabled drivers build config
00:01:47.385 net/softnic: not in enabled drivers build config
00:01:47.385 net/tap: not in enabled drivers build config
00:01:47.385 net/thunderx: not in enabled drivers build config
00:01:47.385 net/txgbe: not in enabled drivers build config
00:01:47.385 net/vdev_netvsc: not in enabled drivers build config
00:01:47.385 net/vhost: not in enabled drivers build config
00:01:47.385 net/virtio: not in enabled drivers build config
00:01:47.385 net/vmxnet3: not in enabled drivers build config
00:01:47.385 raw/*: missing internal dependency, "rawdev"
00:01:47.385 crypto/armv8: not in enabled drivers build config
00:01:47.385 crypto/bcmfs: not in enabled drivers build config
00:01:47.385 crypto/caam_jr: not in enabled drivers build config
00:01:47.385 crypto/ccp: not in enabled drivers build config
00:01:47.385 crypto/cnxk: not in enabled drivers build config
00:01:47.385 crypto/dpaa_sec: not in enabled drivers build config
00:01:47.385 crypto/dpaa2_sec: not in enabled drivers build config
00:01:47.385 crypto/ipsec_mb: not in enabled drivers build config
00:01:47.385 crypto/mlx5: not in enabled drivers build config
00:01:47.385 crypto/mvsam: not in enabled drivers build config
00:01:47.385 crypto/nitrox: not in enabled drivers build config
00:01:47.385 crypto/null: not in enabled drivers build config
00:01:47.385 crypto/octeontx: not in enabled drivers build config
00:01:47.385 crypto/openssl: not in enabled drivers build config
00:01:47.385 crypto/scheduler: not in enabled drivers build config
00:01:47.385 crypto/uadk: not in enabled drivers build config
00:01:47.385 crypto/virtio: not in enabled drivers build config
00:01:47.385 compress/isal: not in enabled drivers build config
00:01:47.385 compress/mlx5: not in enabled drivers build config
00:01:47.385 compress/nitrox: not in enabled drivers build config
00:01:47.385 compress/octeontx: not in enabled drivers build config
00:01:47.385 compress/zlib: not in enabled drivers build config
00:01:47.385 regex/*: missing internal dependency, "regexdev"
00:01:47.385 ml/*: missing internal dependency, "mldev"
00:01:47.385 vdpa/ifc: not in enabled drivers build config
00:01:47.385 vdpa/mlx5: not in enabled drivers build config
00:01:47.385 vdpa/nfp: not in enabled drivers build config
00:01:47.385 vdpa/sfc: not in enabled drivers build config
00:01:47.385 event/*: missing internal dependency, "eventdev"
00:01:47.385 baseband/*: missing internal dependency, "bbdev"
00:01:47.385 gpu/*: missing internal dependency, "gpudev"
00:01:47.385
00:01:47.385
00:01:47.385 Build targets in project: 85
00:01:47.385
00:01:47.385 DPDK 24.03.0
00:01:47.385
00:01:47.385 User defined options
00:01:47.385 buildtype : debug
00:01:47.385 default_library : shared
00:01:47.385 libdir : lib
00:01:47.385 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:47.385 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:47.385 c_link_args :
00:01:47.385 cpu_instruction_set: native
00:01:47.385 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:47.385 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:47.385 enable_docs : false
00:01:47.385 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:47.386 enable_kmods : false
00:01:47.386 max_lcores : 128
00:01:47.386 tests : false
00:01:47.386
00:01:47.386 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:47.386 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:47.652 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:47.652 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:47.652 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:47.652 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:47.652 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:47.652 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:47.652 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:47.652 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:47.652 [9/268] Linking static target lib/librte_kvargs.a
00:01:47.652 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:47.652 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:47.652 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:47.652 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:47.652 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:47.652 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:47.652 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:47.652 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:47.652 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:47.916 [19/268] Linking static target lib/librte_log.a
00:01:47.916 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:47.916 [21/268] Linking static target lib/librte_pci.a
00:01:47.916 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:47.916 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:47.916 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:47.916 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:48.175 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:48.175 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:48.175 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:48.175 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:48.175 [30/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:48.175 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:48.175 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:48.175 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:48.175 [34/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:48.175 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:48.175 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:48.175 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:48.175 [38/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:48.175 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:48.175 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:48.175 [41/268] Linking static target lib/librte_meter.a
00:01:48.176 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:48.176 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:48.176 [44/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:48.176 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:48.176 [46/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:48.176 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:48.176 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:48.176 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:48.176 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:48.176 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:48.176 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:48.176 [53/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:48.176 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:48.176 [55/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:48.176 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:48.176 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:48.176 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:48.176 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:48.176 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:48.176 [61/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:48.176 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:48.176 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:48.176 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:48.176 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:48.176 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:48.176 [67/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:48.176 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:48.176 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:48.176 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:48.176 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:48.176 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:48.176 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:48.176 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:48.176 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:48.176 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:48.176 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:48.176 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:48.176 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:48.176 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:48.176 [81/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:48.176 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:48.176 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:48.176 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:48.176 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:48.176 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:48.176 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:48.176 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:48.176 [89/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.176 [90/268] Linking static target lib/librte_ring.a
00:01:48.176 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:48.176 [92/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:48.176 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:48.176 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:48.176 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:48.176 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:48.176 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:48.176 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:48.176 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:48.176 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:48.176 [101/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:48.176 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:48.176 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:48.435 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:48.435 [105/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:48.435 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:48.435 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:48.435 [108/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.435 [109/268] Linking static target lib/librte_telemetry.a
00:01:48.435 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:48.435 [111/268] Linking static target lib/librte_rcu.a
00:01:48.435 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:48.435 [113/268] Linking static target lib/librte_mempool.a
00:01:48.435 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:48.435 [115/268] Linking static target lib/librte_net.a
00:01:48.435 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:48.435 [117/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:48.435 [118/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:48.435 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:48.435 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:48.435 [121/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:48.435 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:48.435 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:48.435 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:48.435 [125/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:48.436 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:48.436 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:48.436 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:48.436 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:48.436 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:48.436 [131/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:48.436 [132/268] Linking static target lib/librte_eal.a
00:01:48.436 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.436 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:48.436 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:48.436 [136/268] Linking static target lib/librte_cmdline.a
00:01:48.436 [137/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:48.436 [138/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:48.436 [139/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:48.436 [140/268] Linking static target lib/librte_mbuf.a
00:01:48.436 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:48.436 [142/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.694 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.694 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:48.694 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:48.694 [146/268] Linking target lib/librte_log.so.24.1
00:01:48.694 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:48.694 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:48.694 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:48.694 [150/268] Linking static target lib/librte_compressdev.a
00:01:48.694 [151/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.694 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:48.694 [153/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:48.694 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:48.694 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:48.694 [156/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.694 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:48.694 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:48.694 [159/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:48.694 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:48.694 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:48.694 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:48.694 [163/268] Linking static target lib/librte_timer.a
00:01:48.694 [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:48.694 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:48.694 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:48.694 [167/268] Linking static target lib/librte_dmadev.a
00:01:48.694 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:48.694 [169/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:48.694 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:48.694 [171/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:48.694 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:48.694 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:48.694 [174/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:48.694 [175/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:48.694 [176/268] Linking static target lib/librte_security.a
00:01:48.694 [177/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:48.694 [178/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:48.694 [179/268] Linking static target lib/librte_reorder.a
00:01:48.694 [180/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:48.694 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:48.694 [182/268] Linking target lib/librte_kvargs.so.24.1
00:01:48.694 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:48.694 [184/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.694 [185/268] Linking static target lib/librte_power.a
00:01:48.694 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:48.694 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:48.953 [188/268] Linking target lib/librte_telemetry.so.24.1
00:01:48.953 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:48.953 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:48.953 [191/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:48.953 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:48.953 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:48.953 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:48.953 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:48.953 [196/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:48.953 [197/268] Linking static target lib/librte_hash.a
00:01:48.953 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:48.953 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:48.953 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:48.953 [201/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:48.953 [202/268] Linking static target drivers/librte_bus_vdev.a
00:01:48.953 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:48.953 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:48.953 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:48.953
[206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.954 [207/268] Linking static target drivers/librte_mempool_ring.a 00:01:48.954 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.954 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.954 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:49.212 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.212 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.213 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.213 [214/268] Linking static target lib/librte_cryptodev.a 00:01:49.213 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.213 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.213 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.213 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.213 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.213 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.472 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.472 [222/268] Linking static target lib/librte_ethdev.a 00:01:49.472 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.472 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.731 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:49.731 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.731 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.667 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.667 [229/268] Linking static target lib/librte_vhost.a 00:01:50.927 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.304 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.575 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.145 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.145 [234/268] Linking target lib/librte_eal.so.24.1 00:01:58.404 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:58.404 [236/268] Linking target lib/librte_meter.so.24.1 00:01:58.404 [237/268] Linking target lib/librte_timer.so.24.1 00:01:58.404 [238/268] Linking target lib/librte_ring.so.24.1 00:01:58.404 [239/268] Linking target lib/librte_dmadev.so.24.1 00:01:58.404 [240/268] Linking target lib/librte_pci.so.24.1 00:01:58.404 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:58.404 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:58.404 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:58.404 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:58.404 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:58.404 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:58.663 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:58.663 [248/268] Linking target 
lib/librte_mempool.so.24.1 00:01:58.663 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:58.663 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.663 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:58.663 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.663 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:58.923 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.923 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:58.923 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:58.923 [257/268] Linking target lib/librte_net.so.24.1 00:01:58.923 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:58.923 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:58.923 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:59.182 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:59.182 [262/268] Linking target lib/librte_hash.so.24.1 00:01:59.182 [263/268] Linking target lib/librte_ethdev.so.24.1 00:01:59.182 [264/268] Linking target lib/librte_security.so.24.1 00:01:59.182 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:59.182 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:59.182 [267/268] Linking target lib/librte_power.so.24.1 00:01:59.182 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:59.182 INFO: autodetecting backend as ninja 00:01:59.182 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:11.433 CC lib/log/log.o 00:02:11.434 CC lib/log/log_flags.o 00:02:11.434 CC lib/log/log_deprecated.o 00:02:11.434 CC lib/ut_mock/mock.o 00:02:11.434 CC lib/ut/ut.o 00:02:11.434 LIB libspdk_log.a 
00:02:11.434 LIB libspdk_ut_mock.a 00:02:11.434 LIB libspdk_ut.a 00:02:11.434 SO libspdk_log.so.7.1 00:02:11.434 SO libspdk_ut_mock.so.6.0 00:02:11.434 SO libspdk_ut.so.2.0 00:02:11.434 SYMLINK libspdk_log.so 00:02:11.434 SYMLINK libspdk_ut_mock.so 00:02:11.434 SYMLINK libspdk_ut.so 00:02:11.434 CC lib/dma/dma.o 00:02:11.434 CC lib/ioat/ioat.o 00:02:11.434 CC lib/util/base64.o 00:02:11.434 CC lib/util/bit_array.o 00:02:11.434 CC lib/util/cpuset.o 00:02:11.434 CC lib/util/crc16.o 00:02:11.434 CC lib/util/crc32.o 00:02:11.434 CC lib/util/crc32c.o 00:02:11.434 CC lib/util/crc64.o 00:02:11.434 CC lib/util/dif.o 00:02:11.434 CC lib/util/crc32_ieee.o 00:02:11.434 CC lib/util/fd.o 00:02:11.434 CC lib/util/file.o 00:02:11.434 CC lib/util/fd_group.o 00:02:11.434 CC lib/util/hexlify.o 00:02:11.434 CXX lib/trace_parser/trace.o 00:02:11.434 CC lib/util/iov.o 00:02:11.434 CC lib/util/math.o 00:02:11.434 CC lib/util/net.o 00:02:11.434 CC lib/util/pipe.o 00:02:11.434 CC lib/util/strerror_tls.o 00:02:11.434 CC lib/util/xor.o 00:02:11.434 CC lib/util/string.o 00:02:11.434 CC lib/util/zipf.o 00:02:11.434 CC lib/util/uuid.o 00:02:11.434 CC lib/util/md5.o 00:02:11.434 CC lib/vfio_user/host/vfio_user_pci.o 00:02:11.434 CC lib/vfio_user/host/vfio_user.o 00:02:11.434 LIB libspdk_dma.a 00:02:11.434 SO libspdk_dma.so.5.0 00:02:11.434 LIB libspdk_ioat.a 00:02:11.434 SYMLINK libspdk_dma.so 00:02:11.434 SO libspdk_ioat.so.7.0 00:02:11.434 SYMLINK libspdk_ioat.so 00:02:11.434 LIB libspdk_vfio_user.a 00:02:11.434 SO libspdk_vfio_user.so.5.0 00:02:11.434 SYMLINK libspdk_vfio_user.so 00:02:11.434 LIB libspdk_util.a 00:02:11.434 SO libspdk_util.so.10.1 00:02:11.434 SYMLINK libspdk_util.so 00:02:11.434 LIB libspdk_trace_parser.a 00:02:11.434 SO libspdk_trace_parser.so.6.0 00:02:11.434 SYMLINK libspdk_trace_parser.so 00:02:11.693 CC lib/env_dpdk/env.o 00:02:11.693 CC lib/env_dpdk/memory.o 00:02:11.693 CC lib/env_dpdk/pci.o 00:02:11.693 CC lib/env_dpdk/init.o 00:02:11.693 CC lib/env_dpdk/threads.o 
00:02:11.693 CC lib/env_dpdk/pci_ioat.o 00:02:11.693 CC lib/env_dpdk/pci_virtio.o 00:02:11.693 CC lib/env_dpdk/pci_vmd.o 00:02:11.693 CC lib/env_dpdk/pci_idxd.o 00:02:11.693 CC lib/env_dpdk/pci_event.o 00:02:11.693 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:11.693 CC lib/env_dpdk/sigbus_handler.o 00:02:11.693 CC lib/env_dpdk/pci_dpdk.o 00:02:11.693 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:11.693 CC lib/idxd/idxd.o 00:02:11.693 CC lib/idxd/idxd_kernel.o 00:02:11.693 CC lib/idxd/idxd_user.o 00:02:11.693 CC lib/rdma_utils/rdma_utils.o 00:02:11.693 CC lib/conf/conf.o 00:02:11.693 CC lib/vmd/vmd.o 00:02:11.693 CC lib/vmd/led.o 00:02:11.693 CC lib/json/json_util.o 00:02:11.693 CC lib/json/json_parse.o 00:02:11.693 CC lib/json/json_write.o 00:02:11.951 LIB libspdk_conf.a 00:02:11.951 LIB libspdk_rdma_utils.a 00:02:11.951 SO libspdk_conf.so.6.0 00:02:11.951 SO libspdk_rdma_utils.so.1.0 00:02:11.951 LIB libspdk_json.a 00:02:11.951 SYMLINK libspdk_conf.so 00:02:11.951 SYMLINK libspdk_rdma_utils.so 00:02:11.951 SO libspdk_json.so.6.0 00:02:11.951 SYMLINK libspdk_json.so 00:02:12.211 LIB libspdk_idxd.a 00:02:12.211 SO libspdk_idxd.so.12.1 00:02:12.211 LIB libspdk_vmd.a 00:02:12.211 SO libspdk_vmd.so.6.0 00:02:12.211 SYMLINK libspdk_idxd.so 00:02:12.211 CC lib/rdma_provider/common.o 00:02:12.211 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:12.211 SYMLINK libspdk_vmd.so 00:02:12.471 CC lib/jsonrpc/jsonrpc_server.o 00:02:12.471 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.471 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:12.471 CC lib/jsonrpc/jsonrpc_client.o 00:02:12.471 LIB libspdk_rdma_provider.a 00:02:12.471 SO libspdk_rdma_provider.so.7.0 00:02:12.471 SYMLINK libspdk_rdma_provider.so 00:02:12.471 LIB libspdk_jsonrpc.a 00:02:12.471 SO libspdk_jsonrpc.so.6.0 00:02:12.731 SYMLINK libspdk_jsonrpc.so 00:02:12.731 LIB libspdk_env_dpdk.a 00:02:12.731 SO libspdk_env_dpdk.so.15.1 00:02:12.731 SYMLINK libspdk_env_dpdk.so 00:02:12.991 CC lib/rpc/rpc.o 00:02:12.991 LIB libspdk_rpc.a 
00:02:13.250 SO libspdk_rpc.so.6.0 00:02:13.250 SYMLINK libspdk_rpc.so 00:02:13.509 CC lib/trace/trace.o 00:02:13.509 CC lib/trace/trace_flags.o 00:02:13.509 CC lib/trace/trace_rpc.o 00:02:13.509 CC lib/notify/notify.o 00:02:13.509 CC lib/notify/notify_rpc.o 00:02:13.509 CC lib/keyring/keyring.o 00:02:13.509 CC lib/keyring/keyring_rpc.o 00:02:13.768 LIB libspdk_notify.a 00:02:13.768 SO libspdk_notify.so.6.0 00:02:13.768 LIB libspdk_trace.a 00:02:13.768 SO libspdk_trace.so.11.0 00:02:13.768 LIB libspdk_keyring.a 00:02:13.768 SYMLINK libspdk_notify.so 00:02:13.768 SO libspdk_keyring.so.2.0 00:02:13.768 SYMLINK libspdk_trace.so 00:02:13.768 SYMLINK libspdk_keyring.so 00:02:14.026 CC lib/sock/sock.o 00:02:14.026 CC lib/sock/sock_rpc.o 00:02:14.026 CC lib/thread/thread.o 00:02:14.026 CC lib/thread/iobuf.o 00:02:14.285 LIB libspdk_sock.a 00:02:14.285 SO libspdk_sock.so.10.0 00:02:14.544 SYMLINK libspdk_sock.so 00:02:14.804 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.804 CC lib/nvme/nvme_ctrlr.o 00:02:14.804 CC lib/nvme/nvme_fabric.o 00:02:14.804 CC lib/nvme/nvme_ns_cmd.o 00:02:14.804 CC lib/nvme/nvme_ns.o 00:02:14.804 CC lib/nvme/nvme_pcie_common.o 00:02:14.804 CC lib/nvme/nvme_pcie.o 00:02:14.804 CC lib/nvme/nvme_qpair.o 00:02:14.804 CC lib/nvme/nvme_transport.o 00:02:14.804 CC lib/nvme/nvme.o 00:02:14.804 CC lib/nvme/nvme_quirks.o 00:02:14.804 CC lib/nvme/nvme_discovery.o 00:02:14.804 CC lib/nvme/nvme_opal.o 00:02:14.804 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.804 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.804 CC lib/nvme/nvme_tcp.o 00:02:14.804 CC lib/nvme/nvme_io_msg.o 00:02:14.804 CC lib/nvme/nvme_poll_group.o 00:02:14.804 CC lib/nvme/nvme_stubs.o 00:02:14.804 CC lib/nvme/nvme_zns.o 00:02:14.804 CC lib/nvme/nvme_auth.o 00:02:14.804 CC lib/nvme/nvme_cuse.o 00:02:14.804 CC lib/nvme/nvme_vfio_user.o 00:02:14.804 CC lib/nvme/nvme_rdma.o 00:02:15.063 LIB libspdk_thread.a 00:02:15.063 SO libspdk_thread.so.11.0 00:02:15.325 SYMLINK libspdk_thread.so 00:02:15.583 CC 
lib/fsdev/fsdev.o 00:02:15.583 CC lib/fsdev/fsdev_rpc.o 00:02:15.583 CC lib/fsdev/fsdev_io.o 00:02:15.583 CC lib/virtio/virtio_vfio_user.o 00:02:15.583 CC lib/virtio/virtio.o 00:02:15.583 CC lib/virtio/virtio_vhost_user.o 00:02:15.583 CC lib/virtio/virtio_pci.o 00:02:15.583 CC lib/blob/request.o 00:02:15.583 CC lib/blob/blobstore.o 00:02:15.583 CC lib/accel/accel_sw.o 00:02:15.583 CC lib/accel/accel.o 00:02:15.583 CC lib/vfu_tgt/tgt_rpc.o 00:02:15.583 CC lib/blob/zeroes.o 00:02:15.583 CC lib/vfu_tgt/tgt_endpoint.o 00:02:15.583 CC lib/accel/accel_rpc.o 00:02:15.583 CC lib/blob/blob_bs_dev.o 00:02:15.583 CC lib/init/json_config.o 00:02:15.583 CC lib/init/subsystem.o 00:02:15.583 CC lib/init/subsystem_rpc.o 00:02:15.583 CC lib/init/rpc.o 00:02:15.842 LIB libspdk_init.a 00:02:15.842 LIB libspdk_virtio.a 00:02:15.842 SO libspdk_init.so.6.0 00:02:15.842 LIB libspdk_vfu_tgt.a 00:02:15.842 SO libspdk_virtio.so.7.0 00:02:15.842 SO libspdk_vfu_tgt.so.3.0 00:02:15.842 SYMLINK libspdk_init.so 00:02:15.842 SYMLINK libspdk_virtio.so 00:02:15.842 SYMLINK libspdk_vfu_tgt.so 00:02:16.102 LIB libspdk_fsdev.a 00:02:16.102 SO libspdk_fsdev.so.2.0 00:02:16.102 CC lib/event/app.o 00:02:16.102 CC lib/event/reactor.o 00:02:16.102 CC lib/event/log_rpc.o 00:02:16.102 CC lib/event/app_rpc.o 00:02:16.102 CC lib/event/scheduler_static.o 00:02:16.102 SYMLINK libspdk_fsdev.so 00:02:16.361 LIB libspdk_accel.a 00:02:16.361 SO libspdk_accel.so.16.0 00:02:16.361 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:16.361 LIB libspdk_nvme.a 00:02:16.361 SYMLINK libspdk_accel.so 00:02:16.361 LIB libspdk_event.a 00:02:16.361 SO libspdk_event.so.14.0 00:02:16.620 SO libspdk_nvme.so.15.0 00:02:16.620 SYMLINK libspdk_event.so 00:02:16.620 SYMLINK libspdk_nvme.so 00:02:16.620 CC lib/bdev/bdev.o 00:02:16.620 CC lib/bdev/part.o 00:02:16.620 CC lib/bdev/bdev_rpc.o 00:02:16.620 CC lib/bdev/bdev_zone.o 00:02:16.620 CC lib/bdev/scsi_nvme.o 00:02:16.879 LIB libspdk_fuse_dispatcher.a 00:02:16.879 SO 
libspdk_fuse_dispatcher.so.1.0 00:02:16.879 SYMLINK libspdk_fuse_dispatcher.so 00:02:17.822 LIB libspdk_blob.a 00:02:17.822 SO libspdk_blob.so.12.0 00:02:17.822 SYMLINK libspdk_blob.so 00:02:18.125 CC lib/lvol/lvol.o 00:02:18.125 CC lib/blobfs/blobfs.o 00:02:18.125 CC lib/blobfs/tree.o 00:02:18.499 LIB libspdk_bdev.a 00:02:18.789 SO libspdk_bdev.so.17.0 00:02:18.789 LIB libspdk_blobfs.a 00:02:18.789 SYMLINK libspdk_bdev.so 00:02:18.789 SO libspdk_blobfs.so.11.0 00:02:18.789 LIB libspdk_lvol.a 00:02:18.789 SO libspdk_lvol.so.11.0 00:02:18.789 SYMLINK libspdk_blobfs.so 00:02:18.789 SYMLINK libspdk_lvol.so 00:02:19.102 CC lib/ftl/ftl_init.o 00:02:19.102 CC lib/ftl/ftl_core.o 00:02:19.102 CC lib/ftl/ftl_layout.o 00:02:19.102 CC lib/nbd/nbd.o 00:02:19.102 CC lib/ftl/ftl_l2p_flat.o 00:02:19.102 CC lib/ftl/ftl_debug.o 00:02:19.102 CC lib/ftl/ftl_io.o 00:02:19.102 CC lib/nvmf/ctrlr.o 00:02:19.102 CC lib/ftl/ftl_sb.o 00:02:19.102 CC lib/nbd/nbd_rpc.o 00:02:19.102 CC lib/ftl/ftl_l2p.o 00:02:19.102 CC lib/nvmf/ctrlr_discovery.o 00:02:19.102 CC lib/nvmf/ctrlr_bdev.o 00:02:19.102 CC lib/ftl/ftl_nv_cache.o 00:02:19.102 CC lib/ublk/ublk.o 00:02:19.102 CC lib/ftl/ftl_band.o 00:02:19.102 CC lib/ftl/ftl_reloc.o 00:02:19.102 CC lib/nvmf/subsystem.o 00:02:19.102 CC lib/ftl/ftl_band_ops.o 00:02:19.102 CC lib/ublk/ublk_rpc.o 00:02:19.102 CC lib/ftl/ftl_writer.o 00:02:19.102 CC lib/ftl/ftl_l2p_cache.o 00:02:19.102 CC lib/ftl/ftl_rq.o 00:02:19.102 CC lib/nvmf/nvmf.o 00:02:19.102 CC lib/scsi/dev.o 00:02:19.102 CC lib/nvmf/nvmf_rpc.o 00:02:19.102 CC lib/scsi/lun.o 00:02:19.102 CC lib/ftl/ftl_p2l_log.o 00:02:19.102 CC lib/nvmf/transport.o 00:02:19.102 CC lib/ftl/ftl_p2l.o 00:02:19.102 CC lib/scsi/scsi.o 00:02:19.102 CC lib/scsi/port.o 00:02:19.102 CC lib/nvmf/tcp.o 00:02:19.102 CC lib/nvmf/stubs.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt.o 00:02:19.102 CC lib/scsi/scsi_bdev.o 00:02:19.102 CC lib/scsi/task.o 00:02:19.102 CC lib/nvmf/mdns_server.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_bdev.o 
00:02:19.102 CC lib/scsi/scsi_pr.o 00:02:19.102 CC lib/scsi/scsi_rpc.o 00:02:19.102 CC lib/nvmf/vfio_user.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:19.102 CC lib/nvmf/auth.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:19.102 CC lib/nvmf/rdma.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:19.102 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:19.102 CC lib/ftl/utils/ftl_conf.o 00:02:19.102 CC lib/ftl/utils/ftl_mempool.o 00:02:19.102 CC lib/ftl/utils/ftl_md.o 00:02:19.102 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:19.102 CC lib/ftl/utils/ftl_property.o 00:02:19.102 CC lib/ftl/utils/ftl_bitmap.o 00:02:19.102 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:19.102 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:19.102 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:19.102 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:19.102 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:19.102 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:19.102 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:19.102 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:19.102 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:19.102 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:19.102 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:19.102 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:19.102 CC lib/ftl/base/ftl_base_dev.o 00:02:19.102 CC lib/ftl/ftl_trace.o 00:02:19.102 CC lib/ftl/base/ftl_base_bdev.o 00:02:19.670 LIB libspdk_scsi.a 00:02:19.670 SO libspdk_scsi.so.9.0 00:02:19.670 LIB libspdk_nbd.a 00:02:19.670 LIB libspdk_ublk.a 00:02:19.670 SO libspdk_nbd.so.7.0 00:02:19.670 SYMLINK libspdk_scsi.so 00:02:19.670 SO libspdk_ublk.so.3.0 00:02:19.930 SYMLINK libspdk_nbd.so 00:02:19.930 SYMLINK libspdk_ublk.so 00:02:19.930 LIB 
libspdk_ftl.a 00:02:19.930 CC lib/iscsi/conn.o 00:02:19.930 CC lib/iscsi/init_grp.o 00:02:19.930 CC lib/iscsi/iscsi.o 00:02:20.187 CC lib/iscsi/param.o 00:02:20.187 CC lib/iscsi/portal_grp.o 00:02:20.187 CC lib/iscsi/tgt_node.o 00:02:20.187 CC lib/iscsi/iscsi_subsystem.o 00:02:20.187 CC lib/iscsi/iscsi_rpc.o 00:02:20.187 CC lib/iscsi/task.o 00:02:20.187 SO libspdk_ftl.so.9.0 00:02:20.187 CC lib/vhost/vhost.o 00:02:20.187 CC lib/vhost/vhost_rpc.o 00:02:20.187 CC lib/vhost/rte_vhost_user.o 00:02:20.187 CC lib/vhost/vhost_scsi.o 00:02:20.187 CC lib/vhost/vhost_blk.o 00:02:20.187 SYMLINK libspdk_ftl.so 00:02:20.753 LIB libspdk_vhost.a 00:02:21.012 LIB libspdk_nvmf.a 00:02:21.012 SO libspdk_vhost.so.8.0 00:02:21.012 SO libspdk_nvmf.so.20.0 00:02:21.012 SYMLINK libspdk_vhost.so 00:02:21.012 LIB libspdk_iscsi.a 00:02:21.012 SYMLINK libspdk_nvmf.so 00:02:21.012 SO libspdk_iscsi.so.8.0 00:02:21.272 SYMLINK libspdk_iscsi.so 00:02:21.842 CC module/env_dpdk/env_dpdk_rpc.o 00:02:21.842 CC module/vfu_device/vfu_virtio.o 00:02:21.842 CC module/vfu_device/vfu_virtio_blk.o 00:02:21.842 CC module/vfu_device/vfu_virtio_scsi.o 00:02:21.842 CC module/vfu_device/vfu_virtio_rpc.o 00:02:21.842 CC module/vfu_device/vfu_virtio_fs.o 00:02:21.842 CC module/blob/bdev/blob_bdev.o 00:02:21.842 CC module/keyring/file/keyring.o 00:02:21.842 CC module/keyring/file/keyring_rpc.o 00:02:21.842 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:21.842 CC module/accel/error/accel_error.o 00:02:21.842 CC module/accel/error/accel_error_rpc.o 00:02:21.842 CC module/accel/dsa/accel_dsa.o 00:02:21.842 CC module/fsdev/aio/linux_aio_mgr.o 00:02:21.842 CC module/scheduler/gscheduler/gscheduler.o 00:02:21.842 CC module/fsdev/aio/fsdev_aio.o 00:02:21.842 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:21.842 CC module/sock/posix/posix.o 00:02:21.842 LIB libspdk_env_dpdk_rpc.a 00:02:21.842 CC module/accel/dsa/accel_dsa_rpc.o 00:02:21.842 CC module/accel/iaa/accel_iaa.o 00:02:21.842 CC module/accel/iaa/accel_iaa_rpc.o 
00:02:21.842 CC module/keyring/linux/keyring.o 00:02:21.842 CC module/keyring/linux/keyring_rpc.o 00:02:21.842 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:21.842 CC module/accel/ioat/accel_ioat.o 00:02:21.842 CC module/accel/ioat/accel_ioat_rpc.o 00:02:21.842 SO libspdk_env_dpdk_rpc.so.6.0 00:02:22.101 SYMLINK libspdk_env_dpdk_rpc.so 00:02:22.101 LIB libspdk_keyring_file.a 00:02:22.101 LIB libspdk_keyring_linux.a 00:02:22.101 LIB libspdk_scheduler_gscheduler.a 00:02:22.101 SO libspdk_keyring_linux.so.1.0 00:02:22.101 SO libspdk_keyring_file.so.2.0 00:02:22.101 LIB libspdk_scheduler_dpdk_governor.a 00:02:22.101 LIB libspdk_scheduler_dynamic.a 00:02:22.101 LIB libspdk_accel_error.a 00:02:22.101 LIB libspdk_blob_bdev.a 00:02:22.101 LIB libspdk_accel_ioat.a 00:02:22.101 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:22.101 SO libspdk_scheduler_gscheduler.so.4.0 00:02:22.101 SO libspdk_accel_error.so.2.0 00:02:22.101 SO libspdk_scheduler_dynamic.so.4.0 00:02:22.101 SYMLINK libspdk_keyring_linux.so 00:02:22.101 LIB libspdk_accel_iaa.a 00:02:22.101 SO libspdk_accel_ioat.so.6.0 00:02:22.101 SO libspdk_blob_bdev.so.12.0 00:02:22.101 SYMLINK libspdk_keyring_file.so 00:02:22.101 SYMLINK libspdk_scheduler_gscheduler.so 00:02:22.101 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:22.101 LIB libspdk_accel_dsa.a 00:02:22.101 SO libspdk_accel_iaa.so.3.0 00:02:22.101 SYMLINK libspdk_accel_error.so 00:02:22.101 SYMLINK libspdk_scheduler_dynamic.so 00:02:22.101 SYMLINK libspdk_accel_ioat.so 00:02:22.101 SYMLINK libspdk_blob_bdev.so 00:02:22.101 SO libspdk_accel_dsa.so.5.0 00:02:22.101 SYMLINK libspdk_accel_iaa.so 00:02:22.360 LIB libspdk_vfu_device.a 00:02:22.360 SYMLINK libspdk_accel_dsa.so 00:02:22.360 SO libspdk_vfu_device.so.3.0 00:02:22.360 SYMLINK libspdk_vfu_device.so 00:02:22.360 LIB libspdk_fsdev_aio.a 00:02:22.360 SO libspdk_fsdev_aio.so.1.0 00:02:22.620 LIB libspdk_sock_posix.a 00:02:22.620 SO libspdk_sock_posix.so.6.0 00:02:22.620 SYMLINK 
libspdk_fsdev_aio.so 00:02:22.620 CC module/bdev/split/vbdev_split.o 00:02:22.620 CC module/bdev/split/vbdev_split_rpc.o 00:02:22.620 CC module/bdev/delay/vbdev_delay.o 00:02:22.620 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:22.620 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:22.620 CC module/blobfs/bdev/blobfs_bdev.o 00:02:22.620 SYMLINK libspdk_sock_posix.so 00:02:22.620 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:22.620 CC module/bdev/nvme/bdev_mdns_client.o 00:02:22.620 CC module/bdev/nvme/bdev_nvme.o 00:02:22.620 CC module/bdev/nvme/nvme_rpc.o 00:02:22.620 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:22.620 CC module/bdev/nvme/vbdev_opal.o 00:02:22.620 CC module/bdev/null/bdev_null_rpc.o 00:02:22.620 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:22.620 CC module/bdev/null/bdev_null.o 00:02:22.620 CC module/bdev/aio/bdev_aio.o 00:02:22.620 CC module/bdev/aio/bdev_aio_rpc.o 00:02:22.620 CC module/bdev/passthru/vbdev_passthru.o 00:02:22.620 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:22.620 CC module/bdev/lvol/vbdev_lvol.o 00:02:22.620 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:22.620 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:22.620 CC module/bdev/error/vbdev_error.o 00:02:22.620 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:22.620 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:22.620 CC module/bdev/error/vbdev_error_rpc.o 00:02:22.620 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:22.620 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:22.620 CC module/bdev/gpt/gpt.o 00:02:22.620 CC module/bdev/gpt/vbdev_gpt.o 00:02:22.620 CC module/bdev/raid/bdev_raid.o 00:02:22.620 CC module/bdev/raid/bdev_raid_rpc.o 00:02:22.620 CC module/bdev/ftl/bdev_ftl.o 00:02:22.620 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:22.620 CC module/bdev/raid/bdev_raid_sb.o 00:02:22.620 CC module/bdev/raid/raid1.o 00:02:22.620 CC module/bdev/raid/concat.o 00:02:22.620 CC module/bdev/raid/raid0.o 00:02:22.620 CC module/bdev/iscsi/bdev_iscsi.o 00:02:22.620 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:22.620 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:22.620 CC module/bdev/malloc/bdev_malloc.o 00:02:22.879 LIB libspdk_blobfs_bdev.a 00:02:22.879 LIB libspdk_bdev_split.a 00:02:22.879 SO libspdk_blobfs_bdev.so.6.0 00:02:22.879 SO libspdk_bdev_split.so.6.0 00:02:22.879 SYMLINK libspdk_blobfs_bdev.so 00:02:22.879 LIB libspdk_bdev_null.a 00:02:22.879 SYMLINK libspdk_bdev_split.so 00:02:22.879 SO libspdk_bdev_null.so.6.0 00:02:22.879 LIB libspdk_bdev_passthru.a 00:02:22.879 LIB libspdk_bdev_gpt.a 00:02:22.879 LIB libspdk_bdev_error.a 00:02:22.879 SO libspdk_bdev_passthru.so.6.0 00:02:22.879 LIB libspdk_bdev_ftl.a 00:02:22.879 LIB libspdk_bdev_delay.a 00:02:22.879 SO libspdk_bdev_gpt.so.6.0 00:02:22.879 SO libspdk_bdev_error.so.6.0 00:02:22.879 SYMLINK libspdk_bdev_null.so 00:02:22.879 LIB libspdk_bdev_aio.a 00:02:23.138 SO libspdk_bdev_ftl.so.6.0 00:02:23.138 SO libspdk_bdev_delay.so.6.0 00:02:23.139 LIB libspdk_bdev_zone_block.a 00:02:23.139 LIB libspdk_bdev_malloc.a 00:02:23.139 LIB libspdk_bdev_iscsi.a 00:02:23.139 SO libspdk_bdev_aio.so.6.0 00:02:23.139 SYMLINK libspdk_bdev_passthru.so 00:02:23.139 SYMLINK libspdk_bdev_gpt.so 00:02:23.139 SO libspdk_bdev_zone_block.so.6.0 00:02:23.139 SYMLINK libspdk_bdev_error.so 00:02:23.139 SO libspdk_bdev_iscsi.so.6.0 00:02:23.139 SO libspdk_bdev_malloc.so.6.0 00:02:23.139 SYMLINK libspdk_bdev_ftl.so 00:02:23.139 SYMLINK libspdk_bdev_delay.so 00:02:23.139 SYMLINK libspdk_bdev_aio.so 00:02:23.139 LIB libspdk_bdev_virtio.a 00:02:23.139 LIB libspdk_bdev_lvol.a 00:02:23.139 SYMLINK libspdk_bdev_malloc.so 00:02:23.139 SYMLINK libspdk_bdev_zone_block.so 00:02:23.139 SYMLINK libspdk_bdev_iscsi.so 00:02:23.139 SO libspdk_bdev_virtio.so.6.0 00:02:23.139 SO libspdk_bdev_lvol.so.6.0 00:02:23.139 SYMLINK libspdk_bdev_virtio.so 00:02:23.139 SYMLINK libspdk_bdev_lvol.so 00:02:23.398 LIB libspdk_bdev_raid.a 00:02:23.398 SO libspdk_bdev_raid.so.6.0 00:02:23.658 SYMLINK libspdk_bdev_raid.so 
00:02:24.597 LIB libspdk_bdev_nvme.a 00:02:24.597 SO libspdk_bdev_nvme.so.7.1 00:02:24.597 SYMLINK libspdk_bdev_nvme.so 00:02:25.166 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:25.166 CC module/event/subsystems/sock/sock.o 00:02:25.166 CC module/event/subsystems/fsdev/fsdev.o 00:02:25.166 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:25.166 CC module/event/subsystems/vmd/vmd.o 00:02:25.166 CC module/event/subsystems/keyring/keyring.o 00:02:25.166 CC module/event/subsystems/scheduler/scheduler.o 00:02:25.166 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:25.166 CC module/event/subsystems/iobuf/iobuf.o 00:02:25.166 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:25.424 LIB libspdk_event_keyring.a 00:02:25.424 LIB libspdk_event_vhost_blk.a 00:02:25.424 LIB libspdk_event_fsdev.a 00:02:25.424 LIB libspdk_event_sock.a 00:02:25.424 LIB libspdk_event_scheduler.a 00:02:25.424 SO libspdk_event_fsdev.so.1.0 00:02:25.424 LIB libspdk_event_vfu_tgt.a 00:02:25.424 SO libspdk_event_keyring.so.1.0 00:02:25.424 SO libspdk_event_vhost_blk.so.3.0 00:02:25.424 LIB libspdk_event_vmd.a 00:02:25.424 SO libspdk_event_sock.so.5.0 00:02:25.424 LIB libspdk_event_iobuf.a 00:02:25.424 SO libspdk_event_scheduler.so.4.0 00:02:25.424 SO libspdk_event_vfu_tgt.so.3.0 00:02:25.424 SO libspdk_event_vmd.so.6.0 00:02:25.424 SYMLINK libspdk_event_fsdev.so 00:02:25.424 SYMLINK libspdk_event_vhost_blk.so 00:02:25.424 SO libspdk_event_iobuf.so.3.0 00:02:25.424 SYMLINK libspdk_event_keyring.so 00:02:25.424 SYMLINK libspdk_event_sock.so 00:02:25.424 SYMLINK libspdk_event_scheduler.so 00:02:25.424 SYMLINK libspdk_event_vfu_tgt.so 00:02:25.424 SYMLINK libspdk_event_vmd.so 00:02:25.424 SYMLINK libspdk_event_iobuf.so 00:02:25.992 CC module/event/subsystems/accel/accel.o 00:02:25.992 LIB libspdk_event_accel.a 00:02:25.992 SO libspdk_event_accel.so.6.0 00:02:25.992 SYMLINK libspdk_event_accel.so 00:02:26.251 CC module/event/subsystems/bdev/bdev.o 00:02:26.511 LIB libspdk_event_bdev.a 00:02:26.511 
SO libspdk_event_bdev.so.6.0 00:02:26.511 SYMLINK libspdk_event_bdev.so 00:02:26.770 CC module/event/subsystems/nbd/nbd.o 00:02:26.770 CC module/event/subsystems/ublk/ublk.o 00:02:27.029 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:27.029 CC module/event/subsystems/scsi/scsi.o 00:02:27.029 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:27.029 LIB libspdk_event_nbd.a 00:02:27.029 SO libspdk_event_nbd.so.6.0 00:02:27.029 LIB libspdk_event_ublk.a 00:02:27.029 SYMLINK libspdk_event_nbd.so 00:02:27.029 SO libspdk_event_ublk.so.3.0 00:02:27.029 LIB libspdk_event_scsi.a 00:02:27.029 SO libspdk_event_scsi.so.6.0 00:02:27.029 LIB libspdk_event_nvmf.a 00:02:27.029 SYMLINK libspdk_event_ublk.so 00:02:27.288 SYMLINK libspdk_event_scsi.so 00:02:27.288 SO libspdk_event_nvmf.so.6.0 00:02:27.288 SYMLINK libspdk_event_nvmf.so 00:02:27.547 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:27.547 CC module/event/subsystems/iscsi/iscsi.o 00:02:27.547 LIB libspdk_event_vhost_scsi.a 00:02:27.547 LIB libspdk_event_iscsi.a 00:02:27.547 SO libspdk_event_vhost_scsi.so.3.0 00:02:27.547 SO libspdk_event_iscsi.so.6.0 00:02:27.806 SYMLINK libspdk_event_vhost_scsi.so 00:02:27.806 SYMLINK libspdk_event_iscsi.so 00:02:27.806 SO libspdk.so.6.0 00:02:27.806 SYMLINK libspdk.so 00:02:28.066 CXX app/trace/trace.o 00:02:28.066 CC app/spdk_nvme_identify/identify.o 00:02:28.066 CC app/spdk_nvme_discover/discovery_aer.o 00:02:28.328 CC app/trace_record/trace_record.o 00:02:28.328 CC app/spdk_top/spdk_top.o 00:02:28.328 CC app/spdk_nvme_perf/perf.o 00:02:28.328 CC app/spdk_lspci/spdk_lspci.o 00:02:28.328 CC test/rpc_client/rpc_client_test.o 00:02:28.328 TEST_HEADER include/spdk/accel.h 00:02:28.328 TEST_HEADER include/spdk/barrier.h 00:02:28.328 TEST_HEADER include/spdk/accel_module.h 00:02:28.328 TEST_HEADER include/spdk/assert.h 00:02:28.328 TEST_HEADER include/spdk/base64.h 00:02:28.328 TEST_HEADER include/spdk/bdev_module.h 00:02:28.328 TEST_HEADER include/spdk/bdev.h 00:02:28.328 
TEST_HEADER include/spdk/bdev_zone.h 00:02:28.328 TEST_HEADER include/spdk/bit_array.h 00:02:28.328 TEST_HEADER include/spdk/bit_pool.h 00:02:28.329 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:28.329 TEST_HEADER include/spdk/blobfs.h 00:02:28.329 TEST_HEADER include/spdk/blob_bdev.h 00:02:28.329 TEST_HEADER include/spdk/blob.h 00:02:28.329 TEST_HEADER include/spdk/conf.h 00:02:28.329 TEST_HEADER include/spdk/config.h 00:02:28.329 TEST_HEADER include/spdk/cpuset.h 00:02:28.329 TEST_HEADER include/spdk/crc16.h 00:02:28.329 TEST_HEADER include/spdk/crc64.h 00:02:28.329 TEST_HEADER include/spdk/crc32.h 00:02:28.329 TEST_HEADER include/spdk/dif.h 00:02:28.329 TEST_HEADER include/spdk/endian.h 00:02:28.329 CC app/iscsi_tgt/iscsi_tgt.o 00:02:28.329 TEST_HEADER include/spdk/env_dpdk.h 00:02:28.329 TEST_HEADER include/spdk/dma.h 00:02:28.329 TEST_HEADER include/spdk/env.h 00:02:28.329 TEST_HEADER include/spdk/event.h 00:02:28.329 TEST_HEADER include/spdk/fd_group.h 00:02:28.329 TEST_HEADER include/spdk/fsdev.h 00:02:28.329 TEST_HEADER include/spdk/fd.h 00:02:28.329 CC app/spdk_dd/spdk_dd.o 00:02:28.329 TEST_HEADER include/spdk/file.h 00:02:28.329 CC app/nvmf_tgt/nvmf_main.o 00:02:28.329 TEST_HEADER include/spdk/ftl.h 00:02:28.329 TEST_HEADER include/spdk/fsdev_module.h 00:02:28.329 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:28.329 TEST_HEADER include/spdk/gpt_spec.h 00:02:28.329 TEST_HEADER include/spdk/histogram_data.h 00:02:28.329 TEST_HEADER include/spdk/hexlify.h 00:02:28.329 TEST_HEADER include/spdk/idxd.h 00:02:28.329 TEST_HEADER include/spdk/idxd_spec.h 00:02:28.329 TEST_HEADER include/spdk/ioat.h 00:02:28.329 TEST_HEADER include/spdk/init.h 00:02:28.329 TEST_HEADER include/spdk/ioat_spec.h 00:02:28.329 TEST_HEADER include/spdk/json.h 00:02:28.329 TEST_HEADER include/spdk/iscsi_spec.h 00:02:28.329 TEST_HEADER include/spdk/jsonrpc.h 00:02:28.329 TEST_HEADER include/spdk/keyring_module.h 00:02:28.329 TEST_HEADER include/spdk/likely.h 00:02:28.329 TEST_HEADER 
include/spdk/keyring.h 00:02:28.329 TEST_HEADER include/spdk/log.h 00:02:28.329 TEST_HEADER include/spdk/lvol.h 00:02:28.329 TEST_HEADER include/spdk/md5.h 00:02:28.329 TEST_HEADER include/spdk/mmio.h 00:02:28.329 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:28.329 TEST_HEADER include/spdk/memory.h 00:02:28.329 TEST_HEADER include/spdk/net.h 00:02:28.329 TEST_HEADER include/spdk/notify.h 00:02:28.329 TEST_HEADER include/spdk/nbd.h 00:02:28.329 TEST_HEADER include/spdk/nvme.h 00:02:28.329 CC app/spdk_tgt/spdk_tgt.o 00:02:28.329 TEST_HEADER include/spdk/nvme_intel.h 00:02:28.329 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:28.329 TEST_HEADER include/spdk/nvme_spec.h 00:02:28.329 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:28.329 TEST_HEADER include/spdk/nvme_zns.h 00:02:28.329 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:28.329 TEST_HEADER include/spdk/nvmf.h 00:02:28.329 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:28.329 TEST_HEADER include/spdk/nvmf_spec.h 00:02:28.329 TEST_HEADER include/spdk/opal.h 00:02:28.329 TEST_HEADER include/spdk/nvmf_transport.h 00:02:28.329 TEST_HEADER include/spdk/opal_spec.h 00:02:28.329 TEST_HEADER include/spdk/pci_ids.h 00:02:28.329 TEST_HEADER include/spdk/pipe.h 00:02:28.329 TEST_HEADER include/spdk/queue.h 00:02:28.329 TEST_HEADER include/spdk/rpc.h 00:02:28.329 TEST_HEADER include/spdk/reduce.h 00:02:28.329 TEST_HEADER include/spdk/scsi.h 00:02:28.329 TEST_HEADER include/spdk/scheduler.h 00:02:28.329 TEST_HEADER include/spdk/stdinc.h 00:02:28.329 TEST_HEADER include/spdk/sock.h 00:02:28.329 TEST_HEADER include/spdk/scsi_spec.h 00:02:28.329 TEST_HEADER include/spdk/string.h 00:02:28.329 TEST_HEADER include/spdk/thread.h 00:02:28.329 TEST_HEADER include/spdk/trace_parser.h 00:02:28.329 TEST_HEADER include/spdk/trace.h 00:02:28.329 TEST_HEADER include/spdk/tree.h 00:02:28.329 TEST_HEADER include/spdk/ublk.h 00:02:28.329 TEST_HEADER include/spdk/util.h 00:02:28.329 TEST_HEADER include/spdk/version.h 00:02:28.329 TEST_HEADER 
include/spdk/uuid.h 00:02:28.329 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:28.329 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:28.329 TEST_HEADER include/spdk/vhost.h 00:02:28.329 TEST_HEADER include/spdk/vmd.h 00:02:28.329 TEST_HEADER include/spdk/xor.h 00:02:28.329 TEST_HEADER include/spdk/zipf.h 00:02:28.329 CXX test/cpp_headers/accel.o 00:02:28.329 CXX test/cpp_headers/assert.o 00:02:28.329 CXX test/cpp_headers/barrier.o 00:02:28.329 CXX test/cpp_headers/accel_module.o 00:02:28.329 CXX test/cpp_headers/bdev.o 00:02:28.329 CXX test/cpp_headers/bdev_module.o 00:02:28.329 CXX test/cpp_headers/base64.o 00:02:28.329 CXX test/cpp_headers/bit_array.o 00:02:28.329 CXX test/cpp_headers/bdev_zone.o 00:02:28.329 CXX test/cpp_headers/blobfs.o 00:02:28.329 CXX test/cpp_headers/blob_bdev.o 00:02:28.329 CXX test/cpp_headers/conf.o 00:02:28.329 CXX test/cpp_headers/blobfs_bdev.o 00:02:28.329 CXX test/cpp_headers/bit_pool.o 00:02:28.329 CXX test/cpp_headers/config.o 00:02:28.329 CXX test/cpp_headers/blob.o 00:02:28.329 CXX test/cpp_headers/cpuset.o 00:02:28.329 CXX test/cpp_headers/crc32.o 00:02:28.329 CXX test/cpp_headers/crc16.o 00:02:28.329 CXX test/cpp_headers/dif.o 00:02:28.329 CXX test/cpp_headers/crc64.o 00:02:28.329 CXX test/cpp_headers/endian.o 00:02:28.329 CXX test/cpp_headers/env.o 00:02:28.329 CXX test/cpp_headers/dma.o 00:02:28.329 CXX test/cpp_headers/env_dpdk.o 00:02:28.329 CXX test/cpp_headers/event.o 00:02:28.329 CXX test/cpp_headers/fd_group.o 00:02:28.329 CXX test/cpp_headers/file.o 00:02:28.329 CXX test/cpp_headers/fsdev.o 00:02:28.329 CXX test/cpp_headers/fd.o 00:02:28.329 CXX test/cpp_headers/fsdev_module.o 00:02:28.329 CXX test/cpp_headers/ftl.o 00:02:28.329 CXX test/cpp_headers/fuse_dispatcher.o 00:02:28.329 CXX test/cpp_headers/gpt_spec.o 00:02:28.329 CXX test/cpp_headers/hexlify.o 00:02:28.329 CXX test/cpp_headers/histogram_data.o 00:02:28.329 CXX test/cpp_headers/idxd_spec.o 00:02:28.329 CXX test/cpp_headers/idxd.o 00:02:28.329 CXX 
test/cpp_headers/init.o 00:02:28.329 CXX test/cpp_headers/ioat.o 00:02:28.329 CXX test/cpp_headers/ioat_spec.o 00:02:28.329 CXX test/cpp_headers/json.o 00:02:28.329 CXX test/cpp_headers/iscsi_spec.o 00:02:28.329 CXX test/cpp_headers/jsonrpc.o 00:02:28.329 CXX test/cpp_headers/keyring_module.o 00:02:28.329 CXX test/cpp_headers/keyring.o 00:02:28.329 CXX test/cpp_headers/log.o 00:02:28.329 CXX test/cpp_headers/likely.o 00:02:28.329 CXX test/cpp_headers/md5.o 00:02:28.329 CXX test/cpp_headers/lvol.o 00:02:28.329 CXX test/cpp_headers/memory.o 00:02:28.329 CXX test/cpp_headers/mmio.o 00:02:28.329 CXX test/cpp_headers/nbd.o 00:02:28.329 CXX test/cpp_headers/net.o 00:02:28.329 CC app/fio/nvme/fio_plugin.o 00:02:28.329 CXX test/cpp_headers/notify.o 00:02:28.329 CXX test/cpp_headers/nvme_intel.o 00:02:28.329 CXX test/cpp_headers/nvme.o 00:02:28.329 CXX test/cpp_headers/nvme_ocssd.o 00:02:28.329 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:28.329 CXX test/cpp_headers/nvme_spec.o 00:02:28.329 CXX test/cpp_headers/nvmf_cmd.o 00:02:28.329 CXX test/cpp_headers/nvme_zns.o 00:02:28.329 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:28.329 CXX test/cpp_headers/nvmf_spec.o 00:02:28.329 CC test/thread/poller_perf/poller_perf.o 00:02:28.329 CXX test/cpp_headers/nvmf.o 00:02:28.329 CXX test/cpp_headers/nvmf_transport.o 00:02:28.329 CXX test/cpp_headers/opal.o 00:02:28.329 CC examples/util/zipf/zipf.o 00:02:28.329 CC test/app/jsoncat/jsoncat.o 00:02:28.329 CC examples/ioat/perf/perf.o 00:02:28.329 CC test/app/histogram_perf/histogram_perf.o 00:02:28.329 CC test/env/memory/memory_ut.o 00:02:28.329 CC examples/ioat/verify/verify.o 00:02:28.329 CC test/app/stub/stub.o 00:02:28.329 CXX test/cpp_headers/opal_spec.o 00:02:28.329 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:28.600 CC test/env/pci/pci_ut.o 00:02:28.600 CC app/fio/bdev/fio_plugin.o 00:02:28.600 CC test/env/vtophys/vtophys.o 00:02:28.600 LINK spdk_lspci 00:02:28.600 CC test/dma/test_dma/test_dma.o 00:02:28.600 LINK 
rpc_client_test 00:02:28.600 CC test/app/bdev_svc/bdev_svc.o 00:02:28.600 LINK spdk_nvme_discover 00:02:28.871 CC test/env/mem_callbacks/mem_callbacks.o 00:02:28.871 LINK interrupt_tgt 00:02:28.871 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:28.871 LINK jsoncat 00:02:28.871 LINK histogram_perf 00:02:28.871 LINK nvmf_tgt 00:02:28.871 LINK iscsi_tgt 00:02:28.871 LINK zipf 00:02:28.871 LINK poller_perf 00:02:28.871 CXX test/cpp_headers/pci_ids.o 00:02:28.871 LINK spdk_tgt 00:02:28.871 CXX test/cpp_headers/queue.o 00:02:28.871 CXX test/cpp_headers/pipe.o 00:02:28.871 CXX test/cpp_headers/reduce.o 00:02:28.871 CXX test/cpp_headers/rpc.o 00:02:28.871 CXX test/cpp_headers/scsi.o 00:02:28.871 CXX test/cpp_headers/scsi_spec.o 00:02:28.871 CXX test/cpp_headers/scheduler.o 00:02:28.871 CXX test/cpp_headers/sock.o 00:02:28.871 CXX test/cpp_headers/stdinc.o 00:02:28.871 CXX test/cpp_headers/string.o 00:02:28.871 CXX test/cpp_headers/thread.o 00:02:28.871 CXX test/cpp_headers/trace.o 00:02:28.871 CXX test/cpp_headers/trace_parser.o 00:02:28.871 CXX test/cpp_headers/tree.o 00:02:28.871 LINK spdk_trace_record 00:02:29.131 CXX test/cpp_headers/ublk.o 00:02:29.131 CXX test/cpp_headers/util.o 00:02:29.131 CXX test/cpp_headers/uuid.o 00:02:29.131 CXX test/cpp_headers/version.o 00:02:29.131 CXX test/cpp_headers/vfio_user_pci.o 00:02:29.131 CXX test/cpp_headers/vfio_user_spec.o 00:02:29.131 CXX test/cpp_headers/vhost.o 00:02:29.131 CXX test/cpp_headers/vmd.o 00:02:29.131 CXX test/cpp_headers/xor.o 00:02:29.131 CXX test/cpp_headers/zipf.o 00:02:29.131 LINK verify 00:02:29.131 LINK ioat_perf 00:02:29.131 LINK spdk_trace 00:02:29.131 LINK bdev_svc 00:02:29.131 LINK vtophys 00:02:29.131 LINK env_dpdk_post_init 00:02:29.131 LINK stub 00:02:29.131 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:29.131 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:29.131 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:29.131 LINK spdk_dd 00:02:29.390 LINK spdk_nvme 00:02:29.390 LINK pci_ut 00:02:29.390 
LINK spdk_nvme_identify 00:02:29.390 LINK spdk_bdev 00:02:29.390 CC examples/idxd/perf/perf.o 00:02:29.390 LINK test_dma 00:02:29.390 CC app/vhost/vhost.o 00:02:29.390 CC test/event/reactor/reactor.o 00:02:29.390 CC examples/sock/hello_world/hello_sock.o 00:02:29.390 CC examples/vmd/lsvmd/lsvmd.o 00:02:29.390 CC test/event/reactor_perf/reactor_perf.o 00:02:29.390 CC examples/vmd/led/led.o 00:02:29.390 CC examples/thread/thread/thread_ex.o 00:02:29.390 CC test/event/app_repeat/app_repeat.o 00:02:29.390 CC test/event/event_perf/event_perf.o 00:02:29.390 CC test/event/scheduler/scheduler.o 00:02:29.390 LINK nvme_fuzz 00:02:29.648 LINK vhost_fuzz 00:02:29.648 LINK lsvmd 00:02:29.648 LINK reactor_perf 00:02:29.648 LINK reactor 00:02:29.648 LINK spdk_top 00:02:29.648 LINK spdk_nvme_perf 00:02:29.648 LINK led 00:02:29.648 LINK vhost 00:02:29.648 LINK event_perf 00:02:29.648 LINK mem_callbacks 00:02:29.648 LINK app_repeat 00:02:29.648 LINK hello_sock 00:02:29.648 LINK idxd_perf 00:02:29.648 LINK scheduler 00:02:29.648 LINK thread 00:02:29.907 CC test/nvme/aer/aer.o 00:02:29.907 CC test/nvme/boot_partition/boot_partition.o 00:02:29.907 CC test/nvme/err_injection/err_injection.o 00:02:29.907 CC test/nvme/reset/reset.o 00:02:29.907 CC test/nvme/fused_ordering/fused_ordering.o 00:02:29.907 CC test/nvme/simple_copy/simple_copy.o 00:02:29.907 CC test/nvme/sgl/sgl.o 00:02:29.907 CC test/nvme/reserve/reserve.o 00:02:29.907 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:29.907 CC test/nvme/cuse/cuse.o 00:02:29.907 CC test/nvme/connect_stress/connect_stress.o 00:02:29.907 CC test/nvme/overhead/overhead.o 00:02:29.907 CC test/nvme/compliance/nvme_compliance.o 00:02:29.907 CC test/nvme/e2edp/nvme_dp.o 00:02:29.907 CC test/nvme/startup/startup.o 00:02:29.907 CC test/nvme/fdp/fdp.o 00:02:29.907 CC test/accel/dif/dif.o 00:02:29.907 CC test/blobfs/mkfs/mkfs.o 00:02:29.907 LINK memory_ut 00:02:29.907 CC test/lvol/esnap/esnap.o 00:02:30.165 CC examples/nvme/abort/abort.o 00:02:30.165 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:02:30.165 CC examples/nvme/reconnect/reconnect.o 00:02:30.165 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:30.165 LINK boot_partition 00:02:30.165 CC examples/nvme/arbitration/arbitration.o 00:02:30.165 CC examples/nvme/hello_world/hello_world.o 00:02:30.165 CC examples/nvme/hotplug/hotplug.o 00:02:30.165 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:30.165 LINK err_injection 00:02:30.165 LINK doorbell_aers 00:02:30.165 LINK connect_stress 00:02:30.165 LINK startup 00:02:30.165 LINK fused_ordering 00:02:30.165 LINK reserve 00:02:30.165 LINK simple_copy 00:02:30.165 LINK mkfs 00:02:30.165 LINK aer 00:02:30.165 LINK reset 00:02:30.165 LINK nvme_dp 00:02:30.166 CC examples/blob/hello_world/hello_blob.o 00:02:30.166 CC examples/accel/perf/accel_perf.o 00:02:30.166 CC examples/blob/cli/blobcli.o 00:02:30.166 LINK overhead 00:02:30.166 LINK sgl 00:02:30.166 LINK nvme_compliance 00:02:30.166 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:30.166 LINK pmr_persistence 00:02:30.166 LINK fdp 00:02:30.424 LINK hello_world 00:02:30.424 LINK cmb_copy 00:02:30.424 LINK hotplug 00:02:30.424 LINK abort 00:02:30.424 LINK reconnect 00:02:30.424 LINK arbitration 00:02:30.424 LINK hello_blob 00:02:30.424 LINK iscsi_fuzz 00:02:30.424 LINK nvme_manage 00:02:30.424 LINK hello_fsdev 00:02:30.424 LINK dif 00:02:30.684 LINK accel_perf 00:02:30.684 LINK blobcli 00:02:30.943 LINK cuse 00:02:30.943 CC test/bdev/bdevio/bdevio.o 00:02:31.202 CC examples/bdev/hello_world/hello_bdev.o 00:02:31.202 CC examples/bdev/bdevperf/bdevperf.o 00:02:31.462 LINK hello_bdev 00:02:31.462 LINK bdevio 00:02:31.721 LINK bdevperf 00:02:32.290 CC examples/nvmf/nvmf/nvmf.o 00:02:32.549 LINK nvmf 00:02:33.930 LINK esnap 00:02:33.930 00:02:33.930 real 0m55.020s 00:02:33.930 user 7m59.022s 00:02:33.930 sys 3m35.601s 00:02:33.930 03:10:53 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:33.930 03:10:53 make -- common/autotest_common.sh@10 -- $ set +x 
00:02:33.930 ************************************ 00:02:33.930 END TEST make 00:02:33.930 ************************************ 00:02:33.930 03:10:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:33.930 03:10:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:33.930 03:10:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:33.930 03:10:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.930 03:10:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:33.930 03:10:53 -- pm/common@44 -- $ pid=2341147 00:02:33.930 03:10:53 -- pm/common@50 -- $ kill -TERM 2341147 00:02:33.930 03:10:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.930 03:10:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:33.930 03:10:53 -- pm/common@44 -- $ pid=2341149 00:02:33.930 03:10:53 -- pm/common@50 -- $ kill -TERM 2341149 00:02:33.930 03:10:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.930 03:10:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:33.930 03:10:53 -- pm/common@44 -- $ pid=2341150 00:02:33.930 03:10:53 -- pm/common@50 -- $ kill -TERM 2341150 00:02:33.930 03:10:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.930 03:10:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:33.930 03:10:53 -- pm/common@44 -- $ pid=2341178 00:02:33.930 03:10:53 -- pm/common@50 -- $ sudo -E kill -TERM 2341178 00:02:33.930 03:10:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:33.930 03:10:54 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:34.189 03:10:54 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:34.189 03:10:54 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:34.189 03:10:54 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:34.189 03:10:54 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:34.189 03:10:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:34.189 03:10:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:34.189 03:10:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:34.189 03:10:54 -- scripts/common.sh@336 -- # IFS=.-: 00:02:34.189 03:10:54 -- scripts/common.sh@336 -- # read -ra ver1 00:02:34.189 03:10:54 -- scripts/common.sh@337 -- # IFS=.-: 00:02:34.189 03:10:54 -- scripts/common.sh@337 -- # read -ra ver2 00:02:34.189 03:10:54 -- scripts/common.sh@338 -- # local 'op=<' 00:02:34.189 03:10:54 -- scripts/common.sh@340 -- # ver1_l=2 00:02:34.189 03:10:54 -- scripts/common.sh@341 -- # ver2_l=1 00:02:34.189 03:10:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:34.189 03:10:54 -- scripts/common.sh@344 -- # case "$op" in 00:02:34.189 03:10:54 -- scripts/common.sh@345 -- # : 1 00:02:34.189 03:10:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:34.189 03:10:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:34.189 03:10:54 -- scripts/common.sh@365 -- # decimal 1 00:02:34.189 03:10:54 -- scripts/common.sh@353 -- # local d=1 00:02:34.189 03:10:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:34.189 03:10:54 -- scripts/common.sh@355 -- # echo 1 00:02:34.189 03:10:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:34.189 03:10:54 -- scripts/common.sh@366 -- # decimal 2 00:02:34.189 03:10:54 -- scripts/common.sh@353 -- # local d=2 00:02:34.189 03:10:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:34.189 03:10:54 -- scripts/common.sh@355 -- # echo 2 00:02:34.189 03:10:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:34.189 03:10:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:34.189 03:10:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:34.189 03:10:54 -- scripts/common.sh@368 -- # return 0 00:02:34.189 03:10:54 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:34.189 03:10:54 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:34.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:34.189 --rc genhtml_branch_coverage=1 00:02:34.189 --rc genhtml_function_coverage=1 00:02:34.189 --rc genhtml_legend=1 00:02:34.189 --rc geninfo_all_blocks=1 00:02:34.189 --rc geninfo_unexecuted_blocks=1 00:02:34.189 00:02:34.189 ' 00:02:34.189 03:10:54 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:34.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:34.189 --rc genhtml_branch_coverage=1 00:02:34.190 --rc genhtml_function_coverage=1 00:02:34.190 --rc genhtml_legend=1 00:02:34.190 --rc geninfo_all_blocks=1 00:02:34.190 --rc geninfo_unexecuted_blocks=1 00:02:34.190 00:02:34.190 ' 00:02:34.190 03:10:54 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:34.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:34.190 --rc genhtml_branch_coverage=1 00:02:34.190 --rc 
genhtml_function_coverage=1 00:02:34.190 --rc genhtml_legend=1 00:02:34.190 --rc geninfo_all_blocks=1 00:02:34.190 --rc geninfo_unexecuted_blocks=1 00:02:34.190 00:02:34.190 ' 00:02:34.190 03:10:54 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:34.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:34.190 --rc genhtml_branch_coverage=1 00:02:34.190 --rc genhtml_function_coverage=1 00:02:34.190 --rc genhtml_legend=1 00:02:34.190 --rc geninfo_all_blocks=1 00:02:34.190 --rc geninfo_unexecuted_blocks=1 00:02:34.190 00:02:34.190 ' 00:02:34.190 03:10:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:34.190 03:10:54 -- nvmf/common.sh@7 -- # uname -s 00:02:34.190 03:10:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:34.190 03:10:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:34.190 03:10:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:34.190 03:10:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:34.190 03:10:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:34.190 03:10:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:34.190 03:10:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:34.190 03:10:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:34.190 03:10:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:34.190 03:10:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:34.190 03:10:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:34.190 03:10:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:34.190 03:10:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:34.190 03:10:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:34.190 03:10:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:34.190 03:10:54 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:34.190 03:10:54 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:34.190 03:10:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:34.190 03:10:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:34.190 03:10:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:34.190 03:10:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:34.190 03:10:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.190 03:10:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.190 03:10:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.190 03:10:54 -- paths/export.sh@5 -- # export PATH 00:02:34.190 03:10:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.190 03:10:54 -- nvmf/common.sh@51 -- # : 0 00:02:34.190 03:10:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:34.190 03:10:54 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:34.190 03:10:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:34.190 03:10:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:34.190 03:10:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:34.190 03:10:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:34.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:34.190 03:10:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:34.190 03:10:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:34.190 03:10:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:34.190 03:10:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:34.190 03:10:54 -- spdk/autotest.sh@32 -- # uname -s 00:02:34.190 03:10:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:34.190 03:10:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:34.190 03:10:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.190 03:10:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:34.190 03:10:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.190 03:10:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:34.190 03:10:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:34.190 03:10:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:34.190 03:10:54 -- spdk/autotest.sh@48 -- # udevadm_pid=2403375 00:02:34.190 03:10:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:34.190 03:10:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:34.190 03:10:54 -- pm/common@17 -- # local monitor 00:02:34.190 03:10:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.190 03:10:54 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:34.190 03:10:54 -- pm/common@21 -- # date +%s 00:02:34.190 03:10:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.190 03:10:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.190 03:10:54 -- pm/common@21 -- # date +%s 00:02:34.190 03:10:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733451054 00:02:34.190 03:10:54 -- pm/common@25 -- # sleep 1 00:02:34.190 03:10:54 -- pm/common@21 -- # date +%s 00:02:34.190 03:10:54 -- pm/common@21 -- # date +%s 00:02:34.190 03:10:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733451054 00:02:34.190 03:10:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733451054 00:02:34.190 03:10:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733451054 00:02:34.190 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733451054_collect-cpu-load.pm.log 00:02:34.190 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733451054_collect-vmstat.pm.log 00:02:34.190 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733451054_collect-cpu-temp.pm.log 00:02:34.190 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733451054_collect-bmc-pm.bmc.pm.log 00:02:35.128 
03:10:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:35.128 03:10:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:35.128 03:10:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:35.128 03:10:55 -- common/autotest_common.sh@10 -- # set +x 00:02:35.128 03:10:55 -- spdk/autotest.sh@59 -- # create_test_list 00:02:35.128 03:10:55 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:35.128 03:10:55 -- common/autotest_common.sh@10 -- # set +x 00:02:35.386 03:10:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:35.386 03:10:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.386 03:10:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.386 03:10:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:35.386 03:10:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.386 03:10:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:35.386 03:10:55 -- common/autotest_common.sh@1457 -- # uname 00:02:35.386 03:10:55 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:35.386 03:10:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:35.386 03:10:55 -- common/autotest_common.sh@1477 -- # uname 00:02:35.386 03:10:55 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:35.386 03:10:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:35.387 03:10:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:35.387 lcov: LCOV version 1.15 00:02:35.387 03:10:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:57.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:57.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:00.615 03:11:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:00.615 03:11:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:00.615 03:11:20 -- common/autotest_common.sh@10 -- # set +x 00:03:00.615 03:11:20 -- spdk/autotest.sh@78 -- # rm -f 00:03:00.615 03:11:20 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.157 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:03.157 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:03.157 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:03.417 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:03.417 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:03.417 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:03.417 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:03.417 03:11:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:03.417 03:11:23 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:03.417 03:11:23 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:03.417 03:11:23 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:03.417 03:11:23 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:03.417 03:11:23 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:03.417 03:11:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:03.417 03:11:23 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:03.417 03:11:23 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:03.417 03:11:23 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:03.417 03:11:23 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:03.417 03:11:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:03.417 03:11:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:03.417 03:11:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:03.417 03:11:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:03.417 03:11:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:03.417 03:11:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:03.417 03:11:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:03.417 03:11:23 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:03.417 No valid GPT data, bailing 00:03:03.417 03:11:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:03.417 03:11:23 -- scripts/common.sh@394 -- # pt= 00:03:03.417 03:11:23 -- scripts/common.sh@395 -- 
# return 1 00:03:03.417 03:11:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:03.417 1+0 records in 00:03:03.417 1+0 records out 00:03:03.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00151666 s, 691 MB/s 00:03:03.418 03:11:23 -- spdk/autotest.sh@105 -- # sync 00:03:03.418 03:11:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:03.418 03:11:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:03.418 03:11:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:08.692 03:11:28 -- spdk/autotest.sh@111 -- # uname -s 00:03:08.692 03:11:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:08.692 03:11:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:08.692 03:11:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:11.226 Hugepages 00:03:11.226 node hugesize free / total 00:03:11.226 node0 1048576kB 0 / 0 00:03:11.226 node0 2048kB 0 / 0 00:03:11.226 node1 1048576kB 0 / 0 00:03:11.226 node1 2048kB 0 / 0 00:03:11.226 00:03:11.226 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:11.226 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:11.226 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:11.226 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:11.226 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:11.226 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:11.226 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:11.226 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:11.226 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:11.226 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:11.226 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:11.226 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:11.226 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:11.226 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:11.226 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:11.226 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:03:11.226 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:11.226 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:11.226 03:11:31 -- spdk/autotest.sh@117 -- # uname -s 00:03:11.226 03:11:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:11.226 03:11:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:11.226 03:11:31 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.132 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:13.132 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:13.390 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:14.328 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.328 03:11:34 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:15.266 03:11:35 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:15.266 03:11:35 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:15.266 03:11:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:15.266 03:11:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:15.266 03:11:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:15.266 03:11:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:15.266 03:11:35 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:15.266 03:11:35 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:15.266 03:11:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:15.525 03:11:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:15.525 03:11:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:15.525 03:11:35 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.065 Waiting for block devices as requested 00:03:18.065 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:18.065 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:18.065 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:18.065 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:18.325 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:18.325 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:18.325 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:18.585 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:18.585 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:18.585 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:18.585 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:18.844 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:18.844 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:18.844 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:19.103 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:19.103 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:19.103 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:19.103 03:11:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:19.103 03:11:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:19.103 03:11:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:19.103 03:11:39 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:19.103 03:11:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:19.103 03:11:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:19.103 03:11:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:19.363 03:11:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:19.363 03:11:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:19.363 03:11:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:19.363 03:11:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:19.363 03:11:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:19.363 03:11:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:19.363 03:11:39 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:19.363 03:11:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:19.363 03:11:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:19.363 03:11:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:19.363 03:11:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:19.363 03:11:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:19.363 03:11:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:19.363 03:11:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:19.363 03:11:39 -- common/autotest_common.sh@1543 -- # continue 00:03:19.363 03:11:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:19.363 03:11:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:19.363 03:11:39 -- common/autotest_common.sh@10 -- # set +x 00:03:19.363 03:11:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:19.363 03:11:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:19.363 
03:11:39 -- common/autotest_common.sh@10 -- # set +x 00:03:19.363 03:11:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.655 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.655 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.914 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.173 03:11:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:23.173 03:11:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:23.173 03:11:43 -- common/autotest_common.sh@10 -- # set +x 00:03:23.173 03:11:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:23.173 03:11:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:23.173 03:11:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:23.173 03:11:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:23.173 03:11:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:23.173 03:11:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:23.173 03:11:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:23.173 03:11:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:03:23.173 03:11:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:23.173 03:11:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:23.173 03:11:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:23.173 03:11:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:23.173 03:11:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:23.173 03:11:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:23.173 03:11:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:23.173 03:11:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:23.173 03:11:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:23.173 03:11:43 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:23.173 03:11:43 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:23.173 03:11:43 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:23.173 03:11:43 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:23.173 03:11:43 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:23.173 03:11:43 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:23.173 03:11:43 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2417354 00:03:23.173 03:11:43 -- common/autotest_common.sh@1585 -- # waitforlisten 2417354 00:03:23.173 03:11:43 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:23.173 03:11:43 -- common/autotest_common.sh@835 -- # '[' -z 2417354 ']' 00:03:23.173 03:11:43 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:23.173 03:11:43 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:23.173 03:11:43 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:03:23.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:23.173 03:11:43 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:23.173 03:11:43 -- common/autotest_common.sh@10 -- # set +x 00:03:23.433 [2024-12-06 03:11:43.352791] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:03:23.433 [2024-12-06 03:11:43.352837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417354 ] 00:03:23.433 [2024-12-06 03:11:43.414443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.433 [2024-12-06 03:11:43.455157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:23.692 03:11:43 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:23.692 03:11:43 -- common/autotest_common.sh@868 -- # return 0 00:03:23.692 03:11:43 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:23.692 03:11:43 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:23.692 03:11:43 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:26.983 nvme0n1 00:03:26.984 03:11:46 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:26.984 [2024-12-06 03:11:46.851030] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:26.984 request: 00:03:26.984 { 00:03:26.984 "nvme_ctrlr_name": "nvme0", 00:03:26.984 "password": "test", 00:03:26.984 "method": "bdev_nvme_opal_revert", 00:03:26.984 "req_id": 1 00:03:26.984 } 00:03:26.984 Got JSON-RPC error response 00:03:26.984 response: 00:03:26.984 { 00:03:26.984 
"code": -32602, 00:03:26.984 "message": "Invalid parameters" 00:03:26.984 } 00:03:26.984 03:11:46 -- common/autotest_common.sh@1591 -- # true 00:03:26.984 03:11:46 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:26.984 03:11:46 -- common/autotest_common.sh@1595 -- # killprocess 2417354 00:03:26.984 03:11:46 -- common/autotest_common.sh@954 -- # '[' -z 2417354 ']' 00:03:26.984 03:11:46 -- common/autotest_common.sh@958 -- # kill -0 2417354 00:03:26.984 03:11:46 -- common/autotest_common.sh@959 -- # uname 00:03:26.984 03:11:46 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:26.984 03:11:46 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2417354 00:03:26.984 03:11:46 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:26.984 03:11:46 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:26.984 03:11:46 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2417354' 00:03:26.984 killing process with pid 2417354 00:03:26.984 03:11:46 -- common/autotest_common.sh@973 -- # kill 2417354 00:03:26.984 03:11:46 -- common/autotest_common.sh@978 -- # wait 2417354 00:03:28.892 03:11:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:28.892 03:11:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:28.892 03:11:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:28.892 03:11:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:28.892 03:11:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:28.892 03:11:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:28.892 03:11:48 -- common/autotest_common.sh@10 -- # set +x 00:03:28.892 03:11:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:28.892 03:11:48 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:28.892 03:11:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.892 03:11:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.892 03:11:48 
-- common/autotest_common.sh@10 -- # set +x 00:03:28.892 ************************************ 00:03:28.892 START TEST env 00:03:28.892 ************************************ 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:28.892 * Looking for test storage... 00:03:28.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:28.892 03:11:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.892 03:11:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.892 03:11:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.892 03:11:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.892 03:11:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.892 03:11:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.892 03:11:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.892 03:11:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.892 03:11:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.892 03:11:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.892 03:11:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.892 03:11:48 env -- scripts/common.sh@344 -- # case "$op" in 00:03:28.892 03:11:48 env -- scripts/common.sh@345 -- # : 1 00:03:28.892 03:11:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.892 03:11:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.892 03:11:48 env -- scripts/common.sh@365 -- # decimal 1 00:03:28.892 03:11:48 env -- scripts/common.sh@353 -- # local d=1 00:03:28.892 03:11:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.892 03:11:48 env -- scripts/common.sh@355 -- # echo 1 00:03:28.892 03:11:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.892 03:11:48 env -- scripts/common.sh@366 -- # decimal 2 00:03:28.892 03:11:48 env -- scripts/common.sh@353 -- # local d=2 00:03:28.892 03:11:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.892 03:11:48 env -- scripts/common.sh@355 -- # echo 2 00:03:28.892 03:11:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.892 03:11:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.892 03:11:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.892 03:11:48 env -- scripts/common.sh@368 -- # return 0 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:28.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.892 --rc genhtml_branch_coverage=1 00:03:28.892 --rc genhtml_function_coverage=1 00:03:28.892 --rc genhtml_legend=1 00:03:28.892 --rc geninfo_all_blocks=1 00:03:28.892 --rc geninfo_unexecuted_blocks=1 00:03:28.892 00:03:28.892 ' 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:28.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.892 --rc genhtml_branch_coverage=1 00:03:28.892 --rc genhtml_function_coverage=1 00:03:28.892 --rc genhtml_legend=1 00:03:28.892 --rc geninfo_all_blocks=1 00:03:28.892 --rc geninfo_unexecuted_blocks=1 00:03:28.892 00:03:28.892 ' 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:28.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:28.892 --rc genhtml_branch_coverage=1 00:03:28.892 --rc genhtml_function_coverage=1 00:03:28.892 --rc genhtml_legend=1 00:03:28.892 --rc geninfo_all_blocks=1 00:03:28.892 --rc geninfo_unexecuted_blocks=1 00:03:28.892 00:03:28.892 ' 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:28.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.892 --rc genhtml_branch_coverage=1 00:03:28.892 --rc genhtml_function_coverage=1 00:03:28.892 --rc genhtml_legend=1 00:03:28.892 --rc geninfo_all_blocks=1 00:03:28.892 --rc geninfo_unexecuted_blocks=1 00:03:28.892 00:03:28.892 ' 00:03:28.892 03:11:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:28.892 03:11:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.893 03:11:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.893 03:11:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.893 ************************************ 00:03:28.893 START TEST env_memory 00:03:28.893 ************************************ 00:03:28.893 03:11:48 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:28.893 00:03:28.893 00:03:28.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.893 http://cunit.sourceforge.net/ 00:03:28.893 00:03:28.893 00:03:28.893 Suite: memory 00:03:28.893 Test: alloc and free memory map ...[2024-12-06 03:11:48.814405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:28.893 passed 00:03:28.893 Test: mem map translation ...[2024-12-06 03:11:48.832796] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:28.893 [2024-12-06 
03:11:48.832822] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:28.893 [2024-12-06 03:11:48.832855] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:28.893 [2024-12-06 03:11:48.832862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:28.893 passed 00:03:28.893 Test: mem map registration ...[2024-12-06 03:11:48.872156] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:28.893 [2024-12-06 03:11:48.872170] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:28.893 passed 00:03:28.893 Test: mem map adjacent registrations ...passed 00:03:28.893 00:03:28.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.893 suites 1 1 n/a 0 0 00:03:28.893 tests 4 4 4 0 0 00:03:28.893 asserts 152 152 152 0 n/a 00:03:28.893 00:03:28.893 Elapsed time = 0.136 seconds 00:03:28.893 00:03:28.893 real 0m0.149s 00:03:28.893 user 0m0.140s 00:03:28.893 sys 0m0.009s 00:03:28.893 03:11:48 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.893 03:11:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:28.893 ************************************ 00:03:28.893 END TEST env_memory 00:03:28.893 ************************************ 00:03:28.893 03:11:48 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:28.893 03:11:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:28.893 03:11:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.893 03:11:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.893 ************************************ 00:03:28.893 START TEST env_vtophys 00:03:28.893 ************************************ 00:03:28.893 03:11:48 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:28.893 EAL: lib.eal log level changed from notice to debug 00:03:28.893 EAL: Detected lcore 0 as core 0 on socket 0 00:03:28.893 EAL: Detected lcore 1 as core 1 on socket 0 00:03:28.893 EAL: Detected lcore 2 as core 2 on socket 0 00:03:28.893 EAL: Detected lcore 3 as core 3 on socket 0 00:03:28.893 EAL: Detected lcore 4 as core 4 on socket 0 00:03:28.893 EAL: Detected lcore 5 as core 5 on socket 0 00:03:28.893 EAL: Detected lcore 6 as core 6 on socket 0 00:03:28.893 EAL: Detected lcore 7 as core 8 on socket 0 00:03:28.893 EAL: Detected lcore 8 as core 9 on socket 0 00:03:28.893 EAL: Detected lcore 9 as core 10 on socket 0 00:03:28.893 EAL: Detected lcore 10 as core 11 on socket 0 00:03:28.893 EAL: Detected lcore 11 as core 12 on socket 0 00:03:28.893 EAL: Detected lcore 12 as core 13 on socket 0 00:03:28.893 EAL: Detected lcore 13 as core 16 on socket 0 00:03:28.893 EAL: Detected lcore 14 as core 17 on socket 0 00:03:28.893 EAL: Detected lcore 15 as core 18 on socket 0 00:03:28.893 EAL: Detected lcore 16 as core 19 on socket 0 00:03:28.893 EAL: Detected lcore 17 as core 20 on socket 0 00:03:28.893 EAL: Detected lcore 18 as core 21 on socket 0 00:03:28.893 EAL: Detected lcore 19 as core 25 on socket 0 00:03:28.893 EAL: Detected lcore 20 as core 26 on socket 0 00:03:28.893 EAL: Detected lcore 21 as core 27 on socket 0 00:03:28.893 EAL: Detected lcore 22 as core 28 on socket 0 00:03:28.893 EAL: Detected lcore 23 as core 29 on socket 0 00:03:28.893 EAL: Detected lcore 24 as core 0 on socket 1 00:03:28.893 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:28.893 EAL: Detected lcore 26 as core 2 on socket 1 00:03:28.893 EAL: Detected lcore 27 as core 3 on socket 1 00:03:28.893 EAL: Detected lcore 28 as core 4 on socket 1 00:03:28.893 EAL: Detected lcore 29 as core 5 on socket 1 00:03:28.893 EAL: Detected lcore 30 as core 6 on socket 1 00:03:28.893 EAL: Detected lcore 31 as core 9 on socket 1 00:03:28.893 EAL: Detected lcore 32 as core 10 on socket 1 00:03:28.893 EAL: Detected lcore 33 as core 11 on socket 1 00:03:28.893 EAL: Detected lcore 34 as core 12 on socket 1 00:03:28.893 EAL: Detected lcore 35 as core 13 on socket 1 00:03:28.893 EAL: Detected lcore 36 as core 16 on socket 1 00:03:28.893 EAL: Detected lcore 37 as core 17 on socket 1 00:03:28.893 EAL: Detected lcore 38 as core 18 on socket 1 00:03:28.893 EAL: Detected lcore 39 as core 19 on socket 1 00:03:28.893 EAL: Detected lcore 40 as core 20 on socket 1 00:03:28.893 EAL: Detected lcore 41 as core 21 on socket 1 00:03:28.893 EAL: Detected lcore 42 as core 24 on socket 1 00:03:28.893 EAL: Detected lcore 43 as core 25 on socket 1 00:03:28.893 EAL: Detected lcore 44 as core 26 on socket 1 00:03:28.893 EAL: Detected lcore 45 as core 27 on socket 1 00:03:28.893 EAL: Detected lcore 46 as core 28 on socket 1 00:03:28.893 EAL: Detected lcore 47 as core 29 on socket 1 00:03:28.893 EAL: Detected lcore 48 as core 0 on socket 0 00:03:28.893 EAL: Detected lcore 49 as core 1 on socket 0 00:03:28.893 EAL: Detected lcore 50 as core 2 on socket 0 00:03:28.893 EAL: Detected lcore 51 as core 3 on socket 0 00:03:28.893 EAL: Detected lcore 52 as core 4 on socket 0 00:03:28.893 EAL: Detected lcore 53 as core 5 on socket 0 00:03:28.893 EAL: Detected lcore 54 as core 6 on socket 0 00:03:28.893 EAL: Detected lcore 55 as core 8 on socket 0 00:03:28.893 EAL: Detected lcore 56 as core 9 on socket 0 00:03:28.893 EAL: Detected lcore 57 as core 10 on socket 0 00:03:28.893 EAL: Detected lcore 58 as core 11 on socket 0 00:03:28.893 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:28.893 EAL: Detected lcore 60 as core 13 on socket 0 00:03:28.893 EAL: Detected lcore 61 as core 16 on socket 0 00:03:28.893 EAL: Detected lcore 62 as core 17 on socket 0 00:03:28.893 EAL: Detected lcore 63 as core 18 on socket 0 00:03:28.893 EAL: Detected lcore 64 as core 19 on socket 0 00:03:28.893 EAL: Detected lcore 65 as core 20 on socket 0 00:03:28.893 EAL: Detected lcore 66 as core 21 on socket 0 00:03:28.893 EAL: Detected lcore 67 as core 25 on socket 0 00:03:28.893 EAL: Detected lcore 68 as core 26 on socket 0 00:03:28.893 EAL: Detected lcore 69 as core 27 on socket 0 00:03:28.893 EAL: Detected lcore 70 as core 28 on socket 0 00:03:28.893 EAL: Detected lcore 71 as core 29 on socket 0 00:03:28.893 EAL: Detected lcore 72 as core 0 on socket 1 00:03:28.893 EAL: Detected lcore 73 as core 1 on socket 1 00:03:28.893 EAL: Detected lcore 74 as core 2 on socket 1 00:03:28.893 EAL: Detected lcore 75 as core 3 on socket 1 00:03:28.893 EAL: Detected lcore 76 as core 4 on socket 1 00:03:28.893 EAL: Detected lcore 77 as core 5 on socket 1 00:03:28.893 EAL: Detected lcore 78 as core 6 on socket 1 00:03:28.893 EAL: Detected lcore 79 as core 9 on socket 1 00:03:28.893 EAL: Detected lcore 80 as core 10 on socket 1 00:03:28.893 EAL: Detected lcore 81 as core 11 on socket 1 00:03:28.893 EAL: Detected lcore 82 as core 12 on socket 1 00:03:28.893 EAL: Detected lcore 83 as core 13 on socket 1 00:03:28.893 EAL: Detected lcore 84 as core 16 on socket 1 00:03:28.893 EAL: Detected lcore 85 as core 17 on socket 1 00:03:28.893 EAL: Detected lcore 86 as core 18 on socket 1 00:03:28.893 EAL: Detected lcore 87 as core 19 on socket 1 00:03:28.893 EAL: Detected lcore 88 as core 20 on socket 1 00:03:28.893 EAL: Detected lcore 89 as core 21 on socket 1 00:03:28.893 EAL: Detected lcore 90 as core 24 on socket 1 00:03:28.893 EAL: Detected lcore 91 as core 25 on socket 1 00:03:28.893 EAL: Detected lcore 92 as core 26 on socket 1 00:03:28.893 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:28.893 EAL: Detected lcore 94 as core 28 on socket 1 00:03:28.893 EAL: Detected lcore 95 as core 29 on socket 1 00:03:28.893 EAL: Maximum logical cores by configuration: 128 00:03:28.893 EAL: Detected CPU lcores: 96 00:03:28.893 EAL: Detected NUMA nodes: 2 00:03:28.893 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:28.893 EAL: Detected shared linkage of DPDK 00:03:28.893 EAL: No shared files mode enabled, IPC will be disabled 00:03:29.154 EAL: Bus pci wants IOVA as 'DC' 00:03:29.154 EAL: Buses did not request a specific IOVA mode. 00:03:29.154 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:29.154 EAL: Selected IOVA mode 'VA' 00:03:29.154 EAL: Probing VFIO support... 00:03:29.154 EAL: IOMMU type 1 (Type 1) is supported 00:03:29.154 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:29.154 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:29.154 EAL: VFIO support initialized 00:03:29.154 EAL: Ask a virtual area of 0x2e000 bytes 00:03:29.154 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:29.154 EAL: Setting up physically contiguous memory... 
00:03:29.154 EAL: Setting maximum number of open files to 524288
00:03:29.154 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:29.154 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:29.154 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:29.154 EAL: Ask a virtual area of 0x61000 bytes
00:03:29.154 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:29.154 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:29.154 EAL: Ask a virtual area of 0x400000000 bytes
00:03:29.154 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:29.154 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:29.154 EAL: Ask a virtual area of 0x61000 bytes
00:03:29.154 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:29.154 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:29.154 EAL: Ask a virtual area of 0x400000000 bytes
00:03:29.154 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:29.154 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:29.154 EAL: Ask a virtual area of 0x61000 bytes
00:03:29.154 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:29.154 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:29.154 EAL: Ask a virtual area of 0x400000000 bytes
00:03:29.154 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:29.154 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:29.154 EAL: Ask a virtual area of 0x61000 bytes
00:03:29.154 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:29.154 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:29.154 EAL: Ask a virtual area of 0x400000000 bytes
00:03:29.154 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:29.154 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:29.154 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:29.154 EAL: Ask a virtual area of 0x61000 bytes
00:03:29.154 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:29.154 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:29.154 EAL: Ask a virtual area of 0x400000000 bytes
00:03:29.154 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:29.154 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:29.154 EAL: Ask a virtual area of 0x61000 bytes
00:03:29.154 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:29.154 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:29.154 EAL: Ask a virtual area of 0x400000000 bytes
00:03:29.154 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:29.154 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:29.154 EAL: Ask a virtual area of 0x61000 bytes
00:03:29.154 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:29.154 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:29.154 EAL: Ask a virtual area of 0x400000000 bytes
00:03:29.154 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:29.154 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:29.154 EAL: Ask a virtual area of 0x61000 bytes
00:03:29.154 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:29.154 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:29.154 EAL: Ask a virtual area of 0x400000000 bytes
00:03:29.154 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:29.154 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:29.154 EAL: Hugepages will be freed exactly as allocated.
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: TSC frequency is ~2300000 KHz
00:03:29.154 EAL: Main lcore 0 is ready (tid=7fd5aac02a00;cpuset=[0])
00:03:29.154 EAL: Trying to obtain current memory policy.
00:03:29.154 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.154 EAL: Restoring previous memory policy: 0
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was expanded by 2MB
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:29.154 EAL: Mem event callback 'spdk:(nil)' registered
00:03:29.154
00:03:29.154
00:03:29.154 CUnit - A unit testing framework for C - Version 2.1-3
00:03:29.154 http://cunit.sourceforge.net/
00:03:29.154
00:03:29.154
00:03:29.154 Suite: components_suite
00:03:29.154 Test: vtophys_malloc_test ...passed
00:03:29.154 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:29.154 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.154 EAL: Restoring previous memory policy: 4
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was expanded by 4MB
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was shrunk by 4MB
00:03:29.154 EAL: Trying to obtain current memory policy.
00:03:29.154 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.154 EAL: Restoring previous memory policy: 4
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was expanded by 6MB
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was shrunk by 6MB
00:03:29.154 EAL: Trying to obtain current memory policy.
00:03:29.154 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.154 EAL: Restoring previous memory policy: 4
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was expanded by 10MB
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was shrunk by 10MB
00:03:29.154 EAL: Trying to obtain current memory policy.
00:03:29.154 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.154 EAL: Restoring previous memory policy: 4
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was expanded by 18MB
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was shrunk by 18MB
00:03:29.154 EAL: Trying to obtain current memory policy.
00:03:29.154 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.154 EAL: Restoring previous memory policy: 4
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was expanded by 34MB
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.154 EAL: No shared files mode enabled, IPC is disabled
00:03:29.154 EAL: Heap on socket 0 was shrunk by 34MB
00:03:29.154 EAL: Trying to obtain current memory policy.
00:03:29.154 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.154 EAL: Restoring previous memory policy: 4
00:03:29.154 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.154 EAL: request: mp_malloc_sync
00:03:29.155 EAL: No shared files mode enabled, IPC is disabled
00:03:29.155 EAL: Heap on socket 0 was expanded by 66MB
00:03:29.155 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.155 EAL: request: mp_malloc_sync
00:03:29.155 EAL: No shared files mode enabled, IPC is disabled
00:03:29.155 EAL: Heap on socket 0 was shrunk by 66MB
00:03:29.155 EAL: Trying to obtain current memory policy.
00:03:29.155 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.155 EAL: Restoring previous memory policy: 4
00:03:29.155 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.155 EAL: request: mp_malloc_sync
00:03:29.155 EAL: No shared files mode enabled, IPC is disabled
00:03:29.155 EAL: Heap on socket 0 was expanded by 130MB
00:03:29.155 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.155 EAL: request: mp_malloc_sync
00:03:29.155 EAL: No shared files mode enabled, IPC is disabled
00:03:29.155 EAL: Heap on socket 0 was shrunk by 130MB
00:03:29.155 EAL: Trying to obtain current memory policy.
00:03:29.155 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.155 EAL: Restoring previous memory policy: 4
00:03:29.155 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.155 EAL: request: mp_malloc_sync
00:03:29.155 EAL: No shared files mode enabled, IPC is disabled
00:03:29.155 EAL: Heap on socket 0 was expanded by 258MB
00:03:29.155 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.414 EAL: request: mp_malloc_sync
00:03:29.414 EAL: No shared files mode enabled, IPC is disabled
00:03:29.414 EAL: Heap on socket 0 was shrunk by 258MB
00:03:29.414 EAL: Trying to obtain current memory policy.
00:03:29.414 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.414 EAL: Restoring previous memory policy: 4
00:03:29.414 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.414 EAL: request: mp_malloc_sync
00:03:29.414 EAL: No shared files mode enabled, IPC is disabled
00:03:29.414 EAL: Heap on socket 0 was expanded by 514MB
00:03:29.414 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.674 EAL: request: mp_malloc_sync
00:03:29.674 EAL: No shared files mode enabled, IPC is disabled
00:03:29.674 EAL: Heap on socket 0 was shrunk by 514MB
00:03:29.674 EAL: Trying to obtain current memory policy.
00:03:29.674 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:29.674 EAL: Restoring previous memory policy: 4
00:03:29.674 EAL: Calling mem event callback 'spdk:(nil)'
00:03:29.674 EAL: request: mp_malloc_sync
00:03:29.674 EAL: No shared files mode enabled, IPC is disabled
00:03:29.674 EAL: Heap on socket 0 was expanded by 1026MB
00:03:29.935 EAL: Calling mem event callback 'spdk:(nil)'
00:03:30.194 EAL: request: mp_malloc_sync
00:03:30.194 EAL: No shared files mode enabled, IPC is disabled
00:03:30.194 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:30.194 passed
00:03:30.194
00:03:30.194 Run Summary: Type Total Ran Passed Failed Inactive
00:03:30.194 suites 1 1 n/a 0 0
00:03:30.194 tests 2 2 2 0 0
00:03:30.194 asserts 497 497 497 0 n/a
00:03:30.194
00:03:30.194 Elapsed time = 0.974 seconds
00:03:30.194 EAL: Calling mem event callback 'spdk:(nil)'
00:03:30.194 EAL: request: mp_malloc_sync
00:03:30.194 EAL: No shared files mode enabled, IPC is disabled
00:03:30.194 EAL: Heap on socket 0 was shrunk by 2MB
00:03:30.194 EAL: No shared files mode enabled, IPC is disabled
00:03:30.194 EAL: No shared files mode enabled, IPC is disabled
00:03:30.194 EAL: No shared files mode enabled, IPC is disabled
00:03:30.194
00:03:30.194 real 0m1.098s
00:03:30.194 user 0m0.655s
00:03:30.194 sys 0m0.414s
00:03:30.194 03:11:50 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:30.194 03:11:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:30.194 ************************************
00:03:30.194 END TEST env_vtophys
00:03:30.194 ************************************
00:03:30.194 03:11:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:30.194 03:11:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:30.194 03:11:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:30.194 03:11:50 env -- common/autotest_common.sh@10 -- # set +x
00:03:30.194 ************************************
00:03:30.194 START TEST env_pci
00:03:30.194 ************************************
00:03:30.194 03:11:50 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:30.194
00:03:30.194
00:03:30.194 CUnit - A unit testing framework for C - Version 2.1-3
00:03:30.194 http://cunit.sourceforge.net/
00:03:30.194
00:03:30.194
00:03:30.194 Suite: pci
00:03:30.194 Test: pci_hook ...[2024-12-06 03:11:50.174930] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2418638 has claimed it
00:03:30.194 EAL: Cannot find device (10000:00:01.0)
00:03:30.194 EAL: Failed to attach device on primary process
00:03:30.194 passed
00:03:30.194
00:03:30.194 Run Summary: Type Total Ran Passed Failed Inactive
00:03:30.194 suites 1 1 n/a 0 0
00:03:30.194 tests 1 1 1 0 0
00:03:30.194 asserts 25 25 25 0 n/a
00:03:30.194
00:03:30.194 Elapsed time = 0.026 seconds
00:03:30.194
00:03:30.194 real 0m0.046s
00:03:30.194 user 0m0.013s
00:03:30.194 sys 0m0.033s
00:03:30.194 03:11:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:30.194 03:11:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:30.194 ************************************
00:03:30.194 END TEST env_pci
00:03:30.194 ************************************
00:03:30.194 03:11:50 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:30.194 03:11:50 env -- env/env.sh@15 -- # uname
00:03:30.194 03:11:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:30.194 03:11:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:30.194 03:11:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:30.194 03:11:50 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:30.194 03:11:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:30.194 03:11:50 env -- common/autotest_common.sh@10 -- # set +x
00:03:30.194 ************************************
00:03:30.194 START TEST env_dpdk_post_init
00:03:30.194 ************************************
00:03:30.194 03:11:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:30.194 EAL: Detected CPU lcores: 96
00:03:30.194 EAL: Detected NUMA nodes: 2
00:03:30.194 EAL: Detected shared linkage of DPDK
00:03:30.194 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:30.194 EAL: Selected IOVA mode 'VA'
00:03:30.194 EAL: VFIO support initialized
00:03:30.194 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:30.453 EAL: Using IOMMU type 1 (Type 1)
00:03:30.453 EAL: Ignore mapping IO port bar(1)
00:03:30.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:30.453 EAL: Ignore mapping IO port bar(1)
00:03:30.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:30.453 EAL: Ignore mapping IO port bar(1)
00:03:30.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:30.453 EAL: Ignore mapping IO port bar(1)
00:03:30.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:30.453 EAL: Ignore mapping IO port bar(1)
00:03:30.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:30.453 EAL: Ignore mapping IO port bar(1)
00:03:30.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:30.453 EAL: Ignore mapping IO port bar(1)
00:03:30.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:30.453 EAL: Ignore mapping IO port bar(1)
00:03:30.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:31.391 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:03:31.391 EAL: Ignore mapping IO port bar(1)
00:03:31.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:31.391 EAL: Ignore mapping IO port bar(1)
00:03:31.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:31.391 EAL: Ignore mapping IO port bar(1)
00:03:31.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:31.391 EAL: Ignore mapping IO port bar(1)
00:03:31.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:03:31.391 EAL: Ignore mapping IO port bar(1)
00:03:31.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:03:31.391 EAL: Ignore mapping IO port bar(1)
00:03:31.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:03:31.391 EAL: Ignore mapping IO port bar(1)
00:03:31.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:03:31.391 EAL: Ignore mapping IO port bar(1)
00:03:31.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:03:34.676 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:03:34.676 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
Starting DPDK initialization...
Starting SPDK post initialization...
SPDK NVMe probe
Attaching to 0000:5e:00.0
Attached to 0000:5e:00.0
Cleaning up...
00:03:34.676
00:03:34.676 real 0m4.321s
00:03:34.676 user 0m2.957s
00:03:34.676 sys 0m0.442s
00:03:34.676 03:11:54 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:34.676 03:11:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:34.676 ************************************
00:03:34.676 END TEST env_dpdk_post_init
00:03:34.676 ************************************
00:03:34.676 03:11:54 env -- env/env.sh@26 -- # uname
00:03:34.676 03:11:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:34.676 03:11:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:34.676 03:11:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:34.676 03:11:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:34.676 03:11:54 env -- common/autotest_common.sh@10 -- # set +x
00:03:34.676 ************************************
00:03:34.676 START TEST env_mem_callbacks
00:03:34.676 ************************************
00:03:34.676 03:11:54 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:34.676 EAL: Detected CPU lcores: 96
00:03:34.676 EAL: Detected NUMA nodes: 2
00:03:34.676 EAL: Detected shared linkage of DPDK
00:03:34.676 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:34.676 EAL: Selected IOVA mode 'VA'
00:03:34.676 EAL: VFIO support initialized
00:03:34.676 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:34.676
00:03:34.676
00:03:34.676 CUnit - A unit testing framework for C - Version 2.1-3
00:03:34.676 http://cunit.sourceforge.net/
00:03:34.676
00:03:34.676
00:03:34.676 Suite: memory
00:03:34.676 Test: test ...
00:03:34.676 register 0x200000200000 2097152
00:03:34.676 malloc 3145728
00:03:34.676 register 0x200000400000 4194304
00:03:34.676 buf 0x200000500000 len 3145728 PASSED
00:03:34.676 malloc 64
00:03:34.676 buf 0x2000004fff40 len 64 PASSED
00:03:34.676 malloc 4194304
00:03:34.676 register 0x200000800000 6291456
00:03:34.676 buf 0x200000a00000 len 4194304 PASSED
00:03:34.676 free 0x200000500000 3145728
00:03:34.676 free 0x2000004fff40 64
00:03:34.676 unregister 0x200000400000 4194304 PASSED
00:03:34.676 free 0x200000a00000 4194304
00:03:34.676 unregister 0x200000800000 6291456 PASSED
00:03:34.676 malloc 8388608
00:03:34.676 register 0x200000400000 10485760
00:03:34.676 buf 0x200000600000 len 8388608 PASSED
00:03:34.676 free 0x200000600000 8388608
00:03:34.676 unregister 0x200000400000 10485760 PASSED
00:03:34.676 passed
00:03:34.676
00:03:34.677 Run Summary: Type Total Ran Passed Failed Inactive
00:03:34.677 suites 1 1 n/a 0 0
00:03:34.677 tests 1 1 1 0 0
00:03:34.677 asserts 15 15 15 0 n/a
00:03:34.677
00:03:34.677 Elapsed time = 0.006 seconds
00:03:34.677
00:03:34.677 real 0m0.055s
00:03:34.677 user 0m0.018s
00:03:34.677 sys 0m0.037s
00:03:34.677 03:11:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:34.677 03:11:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:34.677 ************************************
00:03:34.677 END TEST env_mem_callbacks
00:03:34.677 ************************************
00:03:34.677
00:03:34.677 real 0m6.192s
00:03:34.677 user 0m4.011s
00:03:34.677 sys 0m1.262s
00:03:34.677 03:11:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:34.677 03:11:54 env -- common/autotest_common.sh@10 -- # set +x
00:03:34.677 ************************************
00:03:34.677 END TEST env
00:03:34.677 ************************************
00:03:34.677 03:11:54 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:34.677 03:11:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:34.677 03:11:54 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:34.677 03:11:54 -- common/autotest_common.sh@10 -- # set +x
00:03:34.935 ************************************
00:03:34.935 START TEST rpc
00:03:34.935 ************************************
00:03:34.935 03:11:54 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:34.935 * Looking for test storage...
00:03:34.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:34.935 03:11:54 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:34.935 03:11:54 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:03:34.935 03:11:54 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:34.935 03:11:54 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:34.935 03:11:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:34.935 03:11:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:34.935 03:11:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:34.935 03:11:54 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:34.935 03:11:54 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:34.935 03:11:54 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:34.935 03:11:54 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:34.935 03:11:54 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:34.935 03:11:54 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:34.935 03:11:54 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:34.935 03:11:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:34.935 03:11:54 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:34.935 03:11:54 rpc -- scripts/common.sh@345 -- # : 1
00:03:34.935 03:11:54 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:34.935 03:11:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:34.935 03:11:54 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:34.935 03:11:54 rpc -- scripts/common.sh@353 -- # local d=1
00:03:34.935 03:11:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:34.935 03:11:54 rpc -- scripts/common.sh@355 -- # echo 1
00:03:34.935 03:11:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:34.935 03:11:54 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:34.935 03:11:54 rpc -- scripts/common.sh@353 -- # local d=2
00:03:34.935 03:11:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:34.935 03:11:54 rpc -- scripts/common.sh@355 -- # echo 2
00:03:34.935 03:11:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:34.936 03:11:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:34.936 03:11:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:34.936 03:11:54 rpc -- scripts/common.sh@368 -- # return 0
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:34.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:34.936 --rc genhtml_branch_coverage=1
00:03:34.936 --rc genhtml_function_coverage=1
00:03:34.936 --rc genhtml_legend=1
00:03:34.936 --rc geninfo_all_blocks=1
00:03:34.936 --rc geninfo_unexecuted_blocks=1
00:03:34.936
00:03:34.936 '
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:34.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:34.936 --rc genhtml_branch_coverage=1
00:03:34.936 --rc genhtml_function_coverage=1
00:03:34.936 --rc genhtml_legend=1
00:03:34.936 --rc geninfo_all_blocks=1
00:03:34.936 --rc geninfo_unexecuted_blocks=1
00:03:34.936
00:03:34.936 '
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:03:34.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:34.936 --rc genhtml_branch_coverage=1
00:03:34.936 --rc genhtml_function_coverage=1
00:03:34.936 --rc genhtml_legend=1
00:03:34.936 --rc geninfo_all_blocks=1
00:03:34.936 --rc geninfo_unexecuted_blocks=1
00:03:34.936
00:03:34.936 '
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:03:34.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:34.936 --rc genhtml_branch_coverage=1
00:03:34.936 --rc genhtml_function_coverage=1
00:03:34.936 --rc genhtml_legend=1
00:03:34.936 --rc geninfo_all_blocks=1
00:03:34.936 --rc geninfo_unexecuted_blocks=1
00:03:34.936
00:03:34.936 '
00:03:34.936 03:11:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2419469
00:03:34.936 03:11:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:34.936 03:11:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2419469
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@835 -- # '[' -z 2419469 ']'
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:34.936 03:11:54 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:34.936 03:11:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:34.936 [2024-12-06 03:11:55.038272] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization...
00:03:34.936 [2024-12-06 03:11:55.038321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419469 ]
00:03:35.194 [2024-12-06 03:11:55.100506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:35.194 [2024-12-06 03:11:55.143639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:35.194 [2024-12-06 03:11:55.143676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2419469' to capture a snapshot of events at runtime.
00:03:35.194 [2024-12-06 03:11:55.143683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:35.194 [2024-12-06 03:11:55.143690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:35.194 [2024-12-06 03:11:55.143695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2419469 for offline analysis/debug.
00:03:35.194 [2024-12-06 03:11:55.144229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:35.452 03:11:55 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:35.453 03:11:55 rpc -- common/autotest_common.sh@868 -- # return 0
00:03:35.453 03:11:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:35.453 03:11:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:35.453 03:11:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:35.453 03:11:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:35.453 03:11:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:35.453 03:11:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:35.453 03:11:55 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:35.453 ************************************
00:03:35.453 START TEST rpc_integrity
00:03:35.453 ************************************
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:35.453 {
00:03:35.453 "name": "Malloc0",
00:03:35.453 "aliases": [
00:03:35.453 "ff6514dc-a959-49fa-b183-03712e7357af"
00:03:35.453 ],
00:03:35.453 "product_name": "Malloc disk",
00:03:35.453 "block_size": 512,
00:03:35.453 "num_blocks": 16384,
00:03:35.453 "uuid": "ff6514dc-a959-49fa-b183-03712e7357af",
00:03:35.453 "assigned_rate_limits": {
00:03:35.453 "rw_ios_per_sec": 0,
00:03:35.453 "rw_mbytes_per_sec": 0,
00:03:35.453 "r_mbytes_per_sec": 0,
00:03:35.453 "w_mbytes_per_sec": 0
00:03:35.453 },
00:03:35.453 "claimed": false,
00:03:35.453 "zoned": false,
00:03:35.453 "supported_io_types": {
00:03:35.453 "read": true,
00:03:35.453 "write": true,
00:03:35.453 "unmap": true,
00:03:35.453 "flush": true,
00:03:35.453 "reset": true,
00:03:35.453 "nvme_admin": false,
00:03:35.453 "nvme_io": false,
00:03:35.453 "nvme_io_md": false,
00:03:35.453 "write_zeroes": true,
00:03:35.453 "zcopy": true,
00:03:35.453 "get_zone_info": false,
00:03:35.453 "zone_management": false,
00:03:35.453 "zone_append": false,
00:03:35.453 "compare": false,
00:03:35.453 "compare_and_write": false,
00:03:35.453 "abort": true,
00:03:35.453 "seek_hole": false,
00:03:35.453 "seek_data": false,
00:03:35.453 "copy": true,
00:03:35.453 "nvme_iov_md": false
00:03:35.453 },
00:03:35.453 "memory_domains": [
00:03:35.453 {
00:03:35.453 "dma_device_id": "system",
00:03:35.453 "dma_device_type": 1
00:03:35.453 },
00:03:35.453 {
00:03:35.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:35.453 "dma_device_type": 2
00:03:35.453 }
00:03:35.453 ],
00:03:35.453 "driver_specific": {}
00:03:35.453 }
00:03:35.453 ]'
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:35.453 [2024-12-06 03:11:55.503296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:35.453 [2024-12-06 03:11:55.503328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:35.453 [2024-12-06 03:11:55.503342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1996100
00:03:35.453 [2024-12-06 03:11:55.503350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:35.453 [2024-12-06 03:11:55.504444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:35.453 [2024-12-06 03:11:55.504467] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:35.453 Passthru0
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:35.453 {
00:03:35.453 "name": "Malloc0",
00:03:35.453 "aliases": [
00:03:35.453 "ff6514dc-a959-49fa-b183-03712e7357af"
00:03:35.453 ],
00:03:35.453 "product_name": "Malloc disk",
00:03:35.453 "block_size": 512,
00:03:35.453 "num_blocks": 16384,
00:03:35.453 "uuid": "ff6514dc-a959-49fa-b183-03712e7357af",
00:03:35.453 "assigned_rate_limits": {
00:03:35.453 "rw_ios_per_sec": 0,
00:03:35.453 "rw_mbytes_per_sec": 0,
00:03:35.453 "r_mbytes_per_sec": 0,
00:03:35.453 "w_mbytes_per_sec": 0
00:03:35.453 },
00:03:35.453 "claimed": true,
00:03:35.453 "claim_type": "exclusive_write",
00:03:35.453 "zoned": false,
00:03:35.453 "supported_io_types": {
00:03:35.453 "read": true,
00:03:35.453 "write": true,
00:03:35.453 "unmap": true,
00:03:35.453 "flush": true,
00:03:35.453 "reset": true,
00:03:35.453 "nvme_admin": false,
00:03:35.453 "nvme_io": false,
00:03:35.453 "nvme_io_md": false,
00:03:35.453 "write_zeroes": true,
00:03:35.453 "zcopy": true,
00:03:35.453 "get_zone_info": false,
00:03:35.453 "zone_management": false,
00:03:35.453 "zone_append": false,
00:03:35.453 "compare": false,
00:03:35.453 "compare_and_write": false,
00:03:35.453 "abort": true,
00:03:35.453 "seek_hole": false,
00:03:35.453 "seek_data": false,
00:03:35.453 "copy": true,
00:03:35.453 "nvme_iov_md": false
00:03:35.453 },
00:03:35.453 "memory_domains": [
00:03:35.453 {
00:03:35.453 "dma_device_id": "system",
00:03:35.453 "dma_device_type": 1
00:03:35.453 },
00:03:35.453 {
00:03:35.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:35.453 "dma_device_type": 2
00:03:35.453 }
00:03:35.453 ],
00:03:35.453 "driver_specific": {}
00:03:35.453 },
00:03:35.453 {
00:03:35.453 "name": "Passthru0", 00:03:35.453 "aliases": [ 00:03:35.453 "45b11206-6566-54ff-b3b5-c1163509748b" 00:03:35.453 ], 00:03:35.453 "product_name": "passthru", 00:03:35.453 "block_size": 512, 00:03:35.453 "num_blocks": 16384, 00:03:35.453 "uuid": "45b11206-6566-54ff-b3b5-c1163509748b", 00:03:35.453 "assigned_rate_limits": { 00:03:35.453 "rw_ios_per_sec": 0, 00:03:35.453 "rw_mbytes_per_sec": 0, 00:03:35.453 "r_mbytes_per_sec": 0, 00:03:35.453 "w_mbytes_per_sec": 0 00:03:35.453 }, 00:03:35.453 "claimed": false, 00:03:35.453 "zoned": false, 00:03:35.453 "supported_io_types": { 00:03:35.453 "read": true, 00:03:35.453 "write": true, 00:03:35.453 "unmap": true, 00:03:35.453 "flush": true, 00:03:35.453 "reset": true, 00:03:35.453 "nvme_admin": false, 00:03:35.453 "nvme_io": false, 00:03:35.453 "nvme_io_md": false, 00:03:35.453 "write_zeroes": true, 00:03:35.453 "zcopy": true, 00:03:35.453 "get_zone_info": false, 00:03:35.453 "zone_management": false, 00:03:35.453 "zone_append": false, 00:03:35.453 "compare": false, 00:03:35.453 "compare_and_write": false, 00:03:35.453 "abort": true, 00:03:35.453 "seek_hole": false, 00:03:35.453 "seek_data": false, 00:03:35.453 "copy": true, 00:03:35.453 "nvme_iov_md": false 00:03:35.453 }, 00:03:35.453 "memory_domains": [ 00:03:35.453 { 00:03:35.453 "dma_device_id": "system", 00:03:35.453 "dma_device_type": 1 00:03:35.453 }, 00:03:35.453 { 00:03:35.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.453 "dma_device_type": 2 00:03:35.453 } 00:03:35.453 ], 00:03:35.453 "driver_specific": { 00:03:35.453 "passthru": { 00:03:35.453 "name": "Passthru0", 00:03:35.453 "base_bdev_name": "Malloc0" 00:03:35.453 } 00:03:35.453 } 00:03:35.453 } 00:03:35.453 ]' 00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:35.453 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:35.453 03:11:55 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.453 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.454 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:35.454 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.454 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.454 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.454 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:35.454 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.454 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.713 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.713 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:35.713 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:35.713 03:11:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:35.713 00:03:35.713 real 0m0.244s 00:03:35.713 user 0m0.158s 00:03:35.713 sys 0m0.024s 00:03:35.713 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.713 03:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.713 ************************************ 00:03:35.713 END TEST rpc_integrity 00:03:35.713 ************************************ 00:03:35.713 03:11:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:35.713 03:11:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.713 03:11:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.713 03:11:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.713 ************************************ 00:03:35.713 START TEST rpc_plugins 
00:03:35.713 ************************************ 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:35.713 { 00:03:35.713 "name": "Malloc1", 00:03:35.713 "aliases": [ 00:03:35.713 "5a99ce4a-e2bd-43b9-85fe-2436bd58c88d" 00:03:35.713 ], 00:03:35.713 "product_name": "Malloc disk", 00:03:35.713 "block_size": 4096, 00:03:35.713 "num_blocks": 256, 00:03:35.713 "uuid": "5a99ce4a-e2bd-43b9-85fe-2436bd58c88d", 00:03:35.713 "assigned_rate_limits": { 00:03:35.713 "rw_ios_per_sec": 0, 00:03:35.713 "rw_mbytes_per_sec": 0, 00:03:35.713 "r_mbytes_per_sec": 0, 00:03:35.713 "w_mbytes_per_sec": 0 00:03:35.713 }, 00:03:35.713 "claimed": false, 00:03:35.713 "zoned": false, 00:03:35.713 "supported_io_types": { 00:03:35.713 "read": true, 00:03:35.713 "write": true, 00:03:35.713 "unmap": true, 00:03:35.713 "flush": true, 00:03:35.713 "reset": true, 00:03:35.713 "nvme_admin": false, 00:03:35.713 "nvme_io": false, 00:03:35.713 "nvme_io_md": false, 00:03:35.713 "write_zeroes": true, 00:03:35.713 "zcopy": true, 00:03:35.713 "get_zone_info": false, 00:03:35.713 "zone_management": false, 00:03:35.713 
"zone_append": false, 00:03:35.713 "compare": false, 00:03:35.713 "compare_and_write": false, 00:03:35.713 "abort": true, 00:03:35.713 "seek_hole": false, 00:03:35.713 "seek_data": false, 00:03:35.713 "copy": true, 00:03:35.713 "nvme_iov_md": false 00:03:35.713 }, 00:03:35.713 "memory_domains": [ 00:03:35.713 { 00:03:35.713 "dma_device_id": "system", 00:03:35.713 "dma_device_type": 1 00:03:35.713 }, 00:03:35.713 { 00:03:35.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.713 "dma_device_type": 2 00:03:35.713 } 00:03:35.713 ], 00:03:35.713 "driver_specific": {} 00:03:35.713 } 00:03:35.713 ]' 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:35.713 03:11:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:35.713 00:03:35.713 real 0m0.134s 00:03:35.713 user 0m0.077s 00:03:35.713 sys 0m0.015s 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.713 03:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:35.713 ************************************ 
00:03:35.713 END TEST rpc_plugins 00:03:35.713 ************************************ 00:03:35.972 03:11:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:35.972 03:11:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.972 03:11:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.972 03:11:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.972 ************************************ 00:03:35.972 START TEST rpc_trace_cmd_test 00:03:35.972 ************************************ 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:35.972 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2419469", 00:03:35.972 "tpoint_group_mask": "0x8", 00:03:35.972 "iscsi_conn": { 00:03:35.972 "mask": "0x2", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "scsi": { 00:03:35.972 "mask": "0x4", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "bdev": { 00:03:35.972 "mask": "0x8", 00:03:35.972 "tpoint_mask": "0xffffffffffffffff" 00:03:35.972 }, 00:03:35.972 "nvmf_rdma": { 00:03:35.972 "mask": "0x10", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "nvmf_tcp": { 00:03:35.972 "mask": "0x20", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "ftl": { 00:03:35.972 "mask": "0x40", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "blobfs": { 00:03:35.972 "mask": "0x80", 00:03:35.972 
"tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "dsa": { 00:03:35.972 "mask": "0x200", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "thread": { 00:03:35.972 "mask": "0x400", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "nvme_pcie": { 00:03:35.972 "mask": "0x800", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "iaa": { 00:03:35.972 "mask": "0x1000", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "nvme_tcp": { 00:03:35.972 "mask": "0x2000", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "bdev_nvme": { 00:03:35.972 "mask": "0x4000", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "sock": { 00:03:35.972 "mask": "0x8000", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "blob": { 00:03:35.972 "mask": "0x10000", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "bdev_raid": { 00:03:35.972 "mask": "0x20000", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 }, 00:03:35.972 "scheduler": { 00:03:35.972 "mask": "0x40000", 00:03:35.972 "tpoint_mask": "0x0" 00:03:35.972 } 00:03:35.972 }' 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:35.972 03:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:35.972 03:11:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:35.972 03:11:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:35.972 03:11:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:35.972 03:11:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:35.972 03:11:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:35.972 00:03:35.972 real 0m0.196s 00:03:35.972 user 0m0.168s 00:03:35.972 sys 0m0.021s 00:03:35.972 03:11:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.972 03:11:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:35.972 ************************************ 00:03:35.972 END TEST rpc_trace_cmd_test 00:03:35.972 ************************************ 00:03:36.232 03:11:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:36.232 03:11:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:36.232 03:11:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:36.232 03:11:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.232 03:11:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.232 03:11:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.232 ************************************ 00:03:36.232 START TEST rpc_daemon_integrity 00:03:36.232 ************************************ 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.232 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:36.232 { 00:03:36.232 "name": "Malloc2", 00:03:36.232 "aliases": [ 00:03:36.232 "d5fc04ba-f6f1-4639-b78d-8a240e01c195" 00:03:36.232 ], 00:03:36.232 "product_name": "Malloc disk", 00:03:36.232 "block_size": 512, 00:03:36.232 "num_blocks": 16384, 00:03:36.232 "uuid": "d5fc04ba-f6f1-4639-b78d-8a240e01c195", 00:03:36.232 "assigned_rate_limits": { 00:03:36.232 "rw_ios_per_sec": 0, 00:03:36.232 "rw_mbytes_per_sec": 0, 00:03:36.232 "r_mbytes_per_sec": 0, 00:03:36.232 "w_mbytes_per_sec": 0 00:03:36.232 }, 00:03:36.232 "claimed": false, 00:03:36.232 "zoned": false, 00:03:36.232 "supported_io_types": { 00:03:36.232 "read": true, 00:03:36.232 "write": true, 00:03:36.232 "unmap": true, 00:03:36.232 "flush": true, 00:03:36.232 "reset": true, 00:03:36.232 "nvme_admin": false, 00:03:36.232 "nvme_io": false, 00:03:36.232 "nvme_io_md": false, 00:03:36.232 "write_zeroes": true, 00:03:36.232 "zcopy": true, 00:03:36.232 "get_zone_info": false, 00:03:36.232 "zone_management": false, 00:03:36.232 "zone_append": false, 00:03:36.232 "compare": false, 00:03:36.232 "compare_and_write": false, 00:03:36.232 "abort": true, 00:03:36.232 "seek_hole": false, 00:03:36.232 "seek_data": false, 00:03:36.232 "copy": true, 00:03:36.232 "nvme_iov_md": false 00:03:36.232 }, 00:03:36.232 "memory_domains": [ 00:03:36.232 { 
00:03:36.233 "dma_device_id": "system", 00:03:36.233 "dma_device_type": 1 00:03:36.233 }, 00:03:36.233 { 00:03:36.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:36.233 "dma_device_type": 2 00:03:36.233 } 00:03:36.233 ], 00:03:36.233 "driver_specific": {} 00:03:36.233 } 00:03:36.233 ]' 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.233 [2024-12-06 03:11:56.281432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:36.233 [2024-12-06 03:11:56.281460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:36.233 [2024-12-06 03:11:56.281473] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1854450 00:03:36.233 [2024-12-06 03:11:56.281479] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:36.233 [2024-12-06 03:11:56.282472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:36.233 [2024-12-06 03:11:56.282494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:36.233 Passthru0 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
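The rpc_integrity / rpc_daemon_integrity flow above repeatedly pipes `rpc_cmd bdev_get_bdevs` through `jq length` and compares the count: 0 before create, 1 after `bdev_malloc_create 8 512`, 2 after `bdev_passthru_create -b Malloc2 -p Passthru0`, and back to 0 after the deletes. A minimal Python sketch of that length check, using trimmed copies of the descriptor JSON from this log (the real SPDK output carries many more fields such as `supported_io_types` and `memory_domains`):

```python
import json

# Trimmed bdev descriptors as reported by bdev_get_bdevs in this log;
# the full output includes assigned_rate_limits, supported_io_types, etc.
after_malloc = '[{"name": "Malloc2", "block_size": 512, "num_blocks": 16384}]'
after_passthru = ('[{"name": "Malloc2", "claimed": true},'
                  ' {"name": "Passthru0", "driver_specific":'
                  ' {"passthru": {"base_bdev_name": "Malloc2"}}}]')

def bdev_count(raw: str) -> int:
    """Equivalent of piping bdev_get_bdevs output through `jq length`."""
    return len(json.loads(raw))

assert bdev_count('[]') == 0            # before bdev_malloc_create
assert bdev_count(after_malloc) == 1    # after bdev_malloc_create 8 512
assert bdev_count(after_passthru) == 2  # after bdev_passthru_create -b Malloc2 -p Passthru0
```

This mirrors the `'[' 0 == 0 ']'` / `'[' 2 == 2 ']'` comparisons the test script performs after each RPC.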
00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:36.233 { 00:03:36.233 "name": "Malloc2", 00:03:36.233 "aliases": [ 00:03:36.233 "d5fc04ba-f6f1-4639-b78d-8a240e01c195" 00:03:36.233 ], 00:03:36.233 "product_name": "Malloc disk", 00:03:36.233 "block_size": 512, 00:03:36.233 "num_blocks": 16384, 00:03:36.233 "uuid": "d5fc04ba-f6f1-4639-b78d-8a240e01c195", 00:03:36.233 "assigned_rate_limits": { 00:03:36.233 "rw_ios_per_sec": 0, 00:03:36.233 "rw_mbytes_per_sec": 0, 00:03:36.233 "r_mbytes_per_sec": 0, 00:03:36.233 "w_mbytes_per_sec": 0 00:03:36.233 }, 00:03:36.233 "claimed": true, 00:03:36.233 "claim_type": "exclusive_write", 00:03:36.233 "zoned": false, 00:03:36.233 "supported_io_types": { 00:03:36.233 "read": true, 00:03:36.233 "write": true, 00:03:36.233 "unmap": true, 00:03:36.233 "flush": true, 00:03:36.233 "reset": true, 00:03:36.233 "nvme_admin": false, 00:03:36.233 "nvme_io": false, 00:03:36.233 "nvme_io_md": false, 00:03:36.233 "write_zeroes": true, 00:03:36.233 "zcopy": true, 00:03:36.233 "get_zone_info": false, 00:03:36.233 "zone_management": false, 00:03:36.233 "zone_append": false, 00:03:36.233 "compare": false, 00:03:36.233 "compare_and_write": false, 00:03:36.233 "abort": true, 00:03:36.233 "seek_hole": false, 00:03:36.233 "seek_data": false, 00:03:36.233 "copy": true, 00:03:36.233 "nvme_iov_md": false 00:03:36.233 }, 00:03:36.233 "memory_domains": [ 00:03:36.233 { 00:03:36.233 "dma_device_id": "system", 00:03:36.233 "dma_device_type": 1 00:03:36.233 }, 00:03:36.233 { 00:03:36.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:36.233 "dma_device_type": 2 00:03:36.233 } 00:03:36.233 ], 00:03:36.233 "driver_specific": {} 00:03:36.233 }, 00:03:36.233 { 00:03:36.233 "name": "Passthru0", 00:03:36.233 "aliases": [ 00:03:36.233 "83349234-bb0b-5e6e-958d-7f41e3a53dfb" 00:03:36.233 ], 00:03:36.233 "product_name": "passthru", 00:03:36.233 "block_size": 512, 00:03:36.233 "num_blocks": 16384, 00:03:36.233 "uuid": 
"83349234-bb0b-5e6e-958d-7f41e3a53dfb", 00:03:36.233 "assigned_rate_limits": { 00:03:36.233 "rw_ios_per_sec": 0, 00:03:36.233 "rw_mbytes_per_sec": 0, 00:03:36.233 "r_mbytes_per_sec": 0, 00:03:36.233 "w_mbytes_per_sec": 0 00:03:36.233 }, 00:03:36.233 "claimed": false, 00:03:36.233 "zoned": false, 00:03:36.233 "supported_io_types": { 00:03:36.233 "read": true, 00:03:36.233 "write": true, 00:03:36.233 "unmap": true, 00:03:36.233 "flush": true, 00:03:36.233 "reset": true, 00:03:36.233 "nvme_admin": false, 00:03:36.233 "nvme_io": false, 00:03:36.233 "nvme_io_md": false, 00:03:36.233 "write_zeroes": true, 00:03:36.233 "zcopy": true, 00:03:36.233 "get_zone_info": false, 00:03:36.233 "zone_management": false, 00:03:36.233 "zone_append": false, 00:03:36.233 "compare": false, 00:03:36.233 "compare_and_write": false, 00:03:36.233 "abort": true, 00:03:36.233 "seek_hole": false, 00:03:36.233 "seek_data": false, 00:03:36.233 "copy": true, 00:03:36.233 "nvme_iov_md": false 00:03:36.233 }, 00:03:36.233 "memory_domains": [ 00:03:36.233 { 00:03:36.233 "dma_device_id": "system", 00:03:36.233 "dma_device_type": 1 00:03:36.233 }, 00:03:36.233 { 00:03:36.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:36.233 "dma_device_type": 2 00:03:36.233 } 00:03:36.233 ], 00:03:36.233 "driver_specific": { 00:03:36.233 "passthru": { 00:03:36.233 "name": "Passthru0", 00:03:36.233 "base_bdev_name": "Malloc2" 00:03:36.233 } 00:03:36.233 } 00:03:36.233 } 00:03:36.233 ]' 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.233 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.493 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.493 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:36.493 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:36.493 03:11:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:36.493 00:03:36.493 real 0m0.264s 00:03:36.493 user 0m0.171s 00:03:36.493 sys 0m0.023s 00:03:36.493 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.493 03:11:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:36.493 ************************************ 00:03:36.493 END TEST rpc_daemon_integrity 00:03:36.493 ************************************ 00:03:36.493 03:11:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:36.493 03:11:56 rpc -- rpc/rpc.sh@84 -- # killprocess 2419469 00:03:36.493 03:11:56 rpc -- common/autotest_common.sh@954 -- # '[' -z 2419469 ']' 00:03:36.493 03:11:56 rpc -- common/autotest_common.sh@958 -- # kill -0 2419469 00:03:36.493 03:11:56 rpc -- common/autotest_common.sh@959 -- # uname 00:03:36.493 03:11:56 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:36.493 03:11:56 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2419469 00:03:36.493 03:11:56 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:36.493 03:11:56 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:36.493 03:11:56 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2419469' 00:03:36.493 killing process with pid 2419469 00:03:36.493 03:11:56 rpc -- common/autotest_common.sh@973 -- # kill 2419469 00:03:36.493 03:11:56 rpc -- common/autotest_common.sh@978 -- # wait 2419469 00:03:36.752 00:03:36.752 real 0m1.978s 00:03:36.752 user 0m2.528s 00:03:36.752 sys 0m0.617s 00:03:36.752 03:11:56 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.752 03:11:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.752 ************************************ 00:03:36.752 END TEST rpc 00:03:36.752 ************************************ 00:03:36.752 03:11:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:36.752 03:11:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.752 03:11:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.752 03:11:56 -- common/autotest_common.sh@10 -- # set +x 00:03:36.752 ************************************ 00:03:36.752 START TEST skip_rpc 00:03:36.752 ************************************ 00:03:36.752 03:11:56 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:37.012 * Looking for test storage... 
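The `killprocess` helper seen above first probes the target with `kill -0 $pid` (send no signal, only check existence/permissions) and inspects `ps --no-headers -o comm=` before actually killing and waiting. A rough Python analogue of that liveness probe (illustrative only, not SPDK code):

```python
import os

def process_alive(pid: int) -> bool:
    """Analogue of the `kill -0 $pid` probe used by killprocess."""
    try:
        os.kill(pid, 0)  # signal 0: no signal delivered, existence check only
        return True
    except ProcessLookupError:
        return False      # no such pid -> killprocess would skip the kill
    except PermissionError:
        return True       # pid exists but is owned by another user

assert process_alive(os.getpid())
```

Only after this probe succeeds does the helper issue the real `kill` and `wait`, which is why the log prints "killing process with pid 2419469" before the waits.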
00:03:37.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:37.012 03:11:56 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:37.012 03:11:56 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:37.012 03:11:56 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:37.012 03:11:57 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.012 03:11:57 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:37.012 03:11:57 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.012 03:11:57 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:37.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.012 --rc genhtml_branch_coverage=1 00:03:37.012 --rc genhtml_function_coverage=1 00:03:37.012 --rc genhtml_legend=1 00:03:37.012 --rc geninfo_all_blocks=1 00:03:37.012 --rc geninfo_unexecuted_blocks=1 00:03:37.012 00:03:37.012 ' 00:03:37.012 03:11:57 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:37.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.012 --rc genhtml_branch_coverage=1 00:03:37.012 --rc genhtml_function_coverage=1 00:03:37.012 --rc genhtml_legend=1 00:03:37.012 --rc geninfo_all_blocks=1 00:03:37.012 --rc geninfo_unexecuted_blocks=1 00:03:37.012 00:03:37.012 ' 00:03:37.012 03:11:57 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:03:37.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.012 --rc genhtml_branch_coverage=1 00:03:37.012 --rc genhtml_function_coverage=1 00:03:37.012 --rc genhtml_legend=1 00:03:37.012 --rc geninfo_all_blocks=1 00:03:37.012 --rc geninfo_unexecuted_blocks=1 00:03:37.012 00:03:37.012 ' 00:03:37.012 03:11:57 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:37.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.012 --rc genhtml_branch_coverage=1 00:03:37.012 --rc genhtml_function_coverage=1 00:03:37.012 --rc genhtml_legend=1 00:03:37.012 --rc geninfo_all_blocks=1 00:03:37.012 --rc geninfo_unexecuted_blocks=1 00:03:37.012 00:03:37.012 ' 00:03:37.012 03:11:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:37.012 03:11:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:37.012 03:11:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:37.013 03:11:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:37.013 03:11:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.013 03:11:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.013 ************************************ 00:03:37.013 START TEST skip_rpc 00:03:37.013 ************************************ 00:03:37.013 03:11:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:37.013 03:11:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2420110 00:03:37.013 03:11:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:37.013 03:11:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:37.013 03:11:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
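The `lt 1.15 2` call above (from `scripts/common.sh`) splits each version string on `.-:` via `IFS` and compares the components pairwise, treating missing components as 0; because lcov 1.15 < 2, the legacy `LCOV_OPTS`/`LCOV` variables are exported. A rough Python sketch of that comparison, under the assumption that all components are numeric (the shell helper also special-cases non-numeric parts):

```python
import re

def lt(v1: str, v2: str) -> bool:
    """Rough analogue of scripts/common.sh `lt` (cmp_versions with '<')."""
    a = [int(x) for x in re.split(r"[.\-:]", v1)]
    b = [int(x) for x in re.split(r"[.\-:]", v2)]
    # pad the shorter version with zeros, as the shell loop effectively does
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b  # componentwise comparison, first difference decides

assert lt("1.15", "2")       # lcov 1.15 < 2 -> legacy LCOV_OPTS path taken
assert not lt("2.0", "1.15")
```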
00:03:37.013 [2024-12-06 03:11:57.136015] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:03:37.013 [2024-12-06 03:11:57.136052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2420110 ] 00:03:37.272 [2024-12-06 03:11:57.197980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.272 [2024-12-06 03:11:57.238657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:42.556 03:12:02 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2420110 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2420110 ']' 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2420110 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2420110 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2420110' 00:03:42.556 killing process with pid 2420110 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2420110 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2420110 00:03:42.556 00:03:42.556 real 0m5.377s 00:03:42.556 user 0m5.130s 00:03:42.556 sys 0m0.285s 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.556 03:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.556 ************************************ 00:03:42.556 END TEST skip_rpc 00:03:42.556 ************************************ 00:03:42.556 03:12:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:42.556 03:12:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.556 03:12:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.556 03:12:02 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.556 ************************************ 00:03:42.556 START TEST skip_rpc_with_json 00:03:42.556 ************************************ 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2421177 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2421177 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2421177 ']' 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:42.556 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.556 [2024-12-06 03:12:02.586931] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:03:42.556 [2024-12-06 03:12:02.586983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421177 ] 00:03:42.556 [2024-12-06 03:12:02.650427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.556 [2024-12-06 03:12:02.693143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.815 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.816 [2024-12-06 03:12:02.902354] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:42.816 request: 00:03:42.816 { 00:03:42.816 "trtype": "tcp", 00:03:42.816 "method": "nvmf_get_transports", 00:03:42.816 "req_id": 1 00:03:42.816 } 00:03:42.816 Got JSON-RPC error response 00:03:42.816 response: 00:03:42.816 { 00:03:42.816 "code": -19, 00:03:42.816 "message": "No such device" 00:03:42.816 } 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.816 [2024-12-06 03:12:02.914475] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:42.816 03:12:02 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.816 03:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:43.075 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.075 03:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.075 { 00:03:43.075 "subsystems": [ 00:03:43.075 { 00:03:43.075 "subsystem": "fsdev", 00:03:43.075 "config": [ 00:03:43.075 { 00:03:43.075 "method": "fsdev_set_opts", 00:03:43.075 "params": { 00:03:43.075 "fsdev_io_pool_size": 65535, 00:03:43.075 "fsdev_io_cache_size": 256 00:03:43.075 } 00:03:43.075 } 00:03:43.075 ] 00:03:43.075 }, 00:03:43.075 { 00:03:43.075 "subsystem": "vfio_user_target", 00:03:43.075 "config": null 00:03:43.075 }, 00:03:43.075 { 00:03:43.075 "subsystem": "keyring", 00:03:43.075 "config": [] 00:03:43.075 }, 00:03:43.075 { 00:03:43.075 "subsystem": "iobuf", 00:03:43.075 "config": [ 00:03:43.075 { 00:03:43.075 "method": "iobuf_set_options", 00:03:43.075 "params": { 00:03:43.075 "small_pool_count": 8192, 00:03:43.075 "large_pool_count": 1024, 00:03:43.075 "small_bufsize": 8192, 00:03:43.075 "large_bufsize": 135168, 00:03:43.075 "enable_numa": false 00:03:43.075 } 00:03:43.075 } 00:03:43.075 ] 00:03:43.075 }, 00:03:43.075 { 00:03:43.075 "subsystem": "sock", 00:03:43.075 "config": [ 00:03:43.075 { 00:03:43.075 "method": "sock_set_default_impl", 00:03:43.075 "params": { 00:03:43.075 "impl_name": "posix" 00:03:43.075 } 00:03:43.075 }, 00:03:43.075 { 00:03:43.075 "method": "sock_impl_set_options", 00:03:43.075 "params": { 00:03:43.075 "impl_name": "ssl", 00:03:43.075 "recv_buf_size": 4096, 00:03:43.075 "send_buf_size": 4096, 
00:03:43.075 "enable_recv_pipe": true, 00:03:43.075 "enable_quickack": false, 00:03:43.075 "enable_placement_id": 0, 00:03:43.075 "enable_zerocopy_send_server": true, 00:03:43.075 "enable_zerocopy_send_client": false, 00:03:43.075 "zerocopy_threshold": 0, 00:03:43.075 "tls_version": 0, 00:03:43.075 "enable_ktls": false 00:03:43.075 } 00:03:43.075 }, 00:03:43.075 { 00:03:43.075 "method": "sock_impl_set_options", 00:03:43.075 "params": { 00:03:43.075 "impl_name": "posix", 00:03:43.075 "recv_buf_size": 2097152, 00:03:43.075 "send_buf_size": 2097152, 00:03:43.075 "enable_recv_pipe": true, 00:03:43.075 "enable_quickack": false, 00:03:43.075 "enable_placement_id": 0, 00:03:43.075 "enable_zerocopy_send_server": true, 00:03:43.075 "enable_zerocopy_send_client": false, 00:03:43.075 "zerocopy_threshold": 0, 00:03:43.075 "tls_version": 0, 00:03:43.075 "enable_ktls": false 00:03:43.076 } 00:03:43.076 } 00:03:43.076 ] 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "vmd", 00:03:43.076 "config": [] 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "accel", 00:03:43.076 "config": [ 00:03:43.076 { 00:03:43.076 "method": "accel_set_options", 00:03:43.076 "params": { 00:03:43.076 "small_cache_size": 128, 00:03:43.076 "large_cache_size": 16, 00:03:43.076 "task_count": 2048, 00:03:43.076 "sequence_count": 2048, 00:03:43.076 "buf_count": 2048 00:03:43.076 } 00:03:43.076 } 00:03:43.076 ] 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "bdev", 00:03:43.076 "config": [ 00:03:43.076 { 00:03:43.076 "method": "bdev_set_options", 00:03:43.076 "params": { 00:03:43.076 "bdev_io_pool_size": 65535, 00:03:43.076 "bdev_io_cache_size": 256, 00:03:43.076 "bdev_auto_examine": true, 00:03:43.076 "iobuf_small_cache_size": 128, 00:03:43.076 "iobuf_large_cache_size": 16 00:03:43.076 } 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "method": "bdev_raid_set_options", 00:03:43.076 "params": { 00:03:43.076 "process_window_size_kb": 1024, 00:03:43.076 "process_max_bandwidth_mb_sec": 0 
00:03:43.076 } 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "method": "bdev_iscsi_set_options", 00:03:43.076 "params": { 00:03:43.076 "timeout_sec": 30 00:03:43.076 } 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "method": "bdev_nvme_set_options", 00:03:43.076 "params": { 00:03:43.076 "action_on_timeout": "none", 00:03:43.076 "timeout_us": 0, 00:03:43.076 "timeout_admin_us": 0, 00:03:43.076 "keep_alive_timeout_ms": 10000, 00:03:43.076 "arbitration_burst": 0, 00:03:43.076 "low_priority_weight": 0, 00:03:43.076 "medium_priority_weight": 0, 00:03:43.076 "high_priority_weight": 0, 00:03:43.076 "nvme_adminq_poll_period_us": 10000, 00:03:43.076 "nvme_ioq_poll_period_us": 0, 00:03:43.076 "io_queue_requests": 0, 00:03:43.076 "delay_cmd_submit": true, 00:03:43.076 "transport_retry_count": 4, 00:03:43.076 "bdev_retry_count": 3, 00:03:43.076 "transport_ack_timeout": 0, 00:03:43.076 "ctrlr_loss_timeout_sec": 0, 00:03:43.076 "reconnect_delay_sec": 0, 00:03:43.076 "fast_io_fail_timeout_sec": 0, 00:03:43.076 "disable_auto_failback": false, 00:03:43.076 "generate_uuids": false, 00:03:43.076 "transport_tos": 0, 00:03:43.076 "nvme_error_stat": false, 00:03:43.076 "rdma_srq_size": 0, 00:03:43.076 "io_path_stat": false, 00:03:43.076 "allow_accel_sequence": false, 00:03:43.076 "rdma_max_cq_size": 0, 00:03:43.076 "rdma_cm_event_timeout_ms": 0, 00:03:43.076 "dhchap_digests": [ 00:03:43.076 "sha256", 00:03:43.076 "sha384", 00:03:43.076 "sha512" 00:03:43.076 ], 00:03:43.076 "dhchap_dhgroups": [ 00:03:43.076 "null", 00:03:43.076 "ffdhe2048", 00:03:43.076 "ffdhe3072", 00:03:43.076 "ffdhe4096", 00:03:43.076 "ffdhe6144", 00:03:43.076 "ffdhe8192" 00:03:43.076 ] 00:03:43.076 } 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "method": "bdev_nvme_set_hotplug", 00:03:43.076 "params": { 00:03:43.076 "period_us": 100000, 00:03:43.076 "enable": false 00:03:43.076 } 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "method": "bdev_wait_for_examine" 00:03:43.076 } 00:03:43.076 ] 00:03:43.076 }, 00:03:43.076 { 
00:03:43.076 "subsystem": "scsi", 00:03:43.076 "config": null 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "scheduler", 00:03:43.076 "config": [ 00:03:43.076 { 00:03:43.076 "method": "framework_set_scheduler", 00:03:43.076 "params": { 00:03:43.076 "name": "static" 00:03:43.076 } 00:03:43.076 } 00:03:43.076 ] 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "vhost_scsi", 00:03:43.076 "config": [] 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "vhost_blk", 00:03:43.076 "config": [] 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "ublk", 00:03:43.076 "config": [] 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "nbd", 00:03:43.076 "config": [] 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "nvmf", 00:03:43.076 "config": [ 00:03:43.076 { 00:03:43.076 "method": "nvmf_set_config", 00:03:43.076 "params": { 00:03:43.076 "discovery_filter": "match_any", 00:03:43.076 "admin_cmd_passthru": { 00:03:43.076 "identify_ctrlr": false 00:03:43.076 }, 00:03:43.076 "dhchap_digests": [ 00:03:43.076 "sha256", 00:03:43.076 "sha384", 00:03:43.076 "sha512" 00:03:43.076 ], 00:03:43.076 "dhchap_dhgroups": [ 00:03:43.076 "null", 00:03:43.076 "ffdhe2048", 00:03:43.076 "ffdhe3072", 00:03:43.076 "ffdhe4096", 00:03:43.076 "ffdhe6144", 00:03:43.076 "ffdhe8192" 00:03:43.076 ] 00:03:43.076 } 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "method": "nvmf_set_max_subsystems", 00:03:43.076 "params": { 00:03:43.076 "max_subsystems": 1024 00:03:43.076 } 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "method": "nvmf_set_crdt", 00:03:43.076 "params": { 00:03:43.076 "crdt1": 0, 00:03:43.076 "crdt2": 0, 00:03:43.076 "crdt3": 0 00:03:43.076 } 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "method": "nvmf_create_transport", 00:03:43.076 "params": { 00:03:43.076 "trtype": "TCP", 00:03:43.076 "max_queue_depth": 128, 00:03:43.076 "max_io_qpairs_per_ctrlr": 127, 00:03:43.076 "in_capsule_data_size": 4096, 00:03:43.076 "max_io_size": 131072, 00:03:43.076 
"io_unit_size": 131072, 00:03:43.076 "max_aq_depth": 128, 00:03:43.076 "num_shared_buffers": 511, 00:03:43.076 "buf_cache_size": 4294967295, 00:03:43.076 "dif_insert_or_strip": false, 00:03:43.076 "zcopy": false, 00:03:43.076 "c2h_success": true, 00:03:43.076 "sock_priority": 0, 00:03:43.076 "abort_timeout_sec": 1, 00:03:43.076 "ack_timeout": 0, 00:03:43.076 "data_wr_pool_size": 0 00:03:43.076 } 00:03:43.076 } 00:03:43.076 ] 00:03:43.076 }, 00:03:43.076 { 00:03:43.076 "subsystem": "iscsi", 00:03:43.076 "config": [ 00:03:43.076 { 00:03:43.076 "method": "iscsi_set_options", 00:03:43.076 "params": { 00:03:43.076 "node_base": "iqn.2016-06.io.spdk", 00:03:43.076 "max_sessions": 128, 00:03:43.076 "max_connections_per_session": 2, 00:03:43.076 "max_queue_depth": 64, 00:03:43.076 "default_time2wait": 2, 00:03:43.076 "default_time2retain": 20, 00:03:43.076 "first_burst_length": 8192, 00:03:43.076 "immediate_data": true, 00:03:43.076 "allow_duplicated_isid": false, 00:03:43.076 "error_recovery_level": 0, 00:03:43.076 "nop_timeout": 60, 00:03:43.076 "nop_in_interval": 30, 00:03:43.076 "disable_chap": false, 00:03:43.076 "require_chap": false, 00:03:43.076 "mutual_chap": false, 00:03:43.076 "chap_group": 0, 00:03:43.076 "max_large_datain_per_connection": 64, 00:03:43.076 "max_r2t_per_connection": 4, 00:03:43.076 "pdu_pool_size": 36864, 00:03:43.076 "immediate_data_pool_size": 16384, 00:03:43.076 "data_out_pool_size": 2048 00:03:43.076 } 00:03:43.076 } 00:03:43.076 ] 00:03:43.076 } 00:03:43.076 ] 00:03:43.076 } 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2421177 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2421177 ']' 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2421177 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421177 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421177' 00:03:43.076 killing process with pid 2421177 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2421177 00:03:43.076 03:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2421177 00:03:43.336 03:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2421199 00:03:43.336 03:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:43.336 03:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2421199 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2421199 ']' 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2421199 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421199 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421199' 00:03:48.723 killing process with pid 2421199 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2421199 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2421199 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:48.723 00:03:48.723 real 0m6.272s 00:03:48.723 user 0m5.973s 00:03:48.723 sys 0m0.588s 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.723 03:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.723 ************************************ 00:03:48.723 END TEST skip_rpc_with_json 00:03:48.723 ************************************ 00:03:48.724 03:12:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:48.724 03:12:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.724 03:12:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.724 03:12:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.984 ************************************ 00:03:48.984 START TEST skip_rpc_with_delay 00:03:48.984 ************************************ 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:48.984 [2024-12-06 03:12:08.927273] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:48.984 00:03:48.984 real 0m0.068s 00:03:48.984 user 0m0.049s 00:03:48.984 sys 0m0.019s 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.984 03:12:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:48.984 ************************************ 00:03:48.984 END TEST skip_rpc_with_delay 00:03:48.984 ************************************ 00:03:48.984 03:12:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:48.984 03:12:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:48.984 03:12:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:48.984 03:12:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.984 03:12:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.984 03:12:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.984 ************************************ 00:03:48.984 START TEST exit_on_failed_rpc_init 00:03:48.984 ************************************ 00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2422472 00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2422472 
00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2422472 ']' 00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.984 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.984 [2024-12-06 03:12:09.048309] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:03:48.984 [2024-12-06 03:12:09.048350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422472 ] 00:03:48.984 [2024-12-06 03:12:09.108793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.244 [2024-12-06 03:12:09.153694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.244 
03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:49.244 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.504 [2024-12-06 03:12:09.423848] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:03:49.504 [2024-12-06 03:12:09.423896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422702 ] 00:03:49.504 [2024-12-06 03:12:09.485507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.504 [2024-12-06 03:12:09.526549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:49.504 [2024-12-06 03:12:09.526603] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:49.504 [2024-12-06 03:12:09.526612] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:49.504 [2024-12-06 03:12:09.526618] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2422472 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2422472 ']' 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2422472 00:03:49.504 03:12:09 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2422472 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:49.504 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:49.505 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2422472' 00:03:49.505 killing process with pid 2422472 00:03:49.505 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2422472 00:03:49.505 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2422472 00:03:50.075 00:03:50.075 real 0m0.902s 00:03:50.075 user 0m0.965s 00:03:50.075 sys 0m0.363s 00:03:50.075 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.076 03:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:50.076 ************************************ 00:03:50.076 END TEST exit_on_failed_rpc_init 00:03:50.076 ************************************ 00:03:50.076 03:12:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:50.076 00:03:50.076 real 0m13.072s 00:03:50.076 user 0m12.328s 00:03:50.076 sys 0m1.524s 00:03:50.076 03:12:09 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.076 03:12:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.076 ************************************ 00:03:50.076 END TEST skip_rpc 00:03:50.076 ************************************ 00:03:50.076 03:12:09 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:50.076 03:12:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.076 03:12:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.076 03:12:09 -- common/autotest_common.sh@10 -- # set +x 00:03:50.076 ************************************ 00:03:50.076 START TEST rpc_client 00:03:50.076 ************************************ 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:50.076 * Looking for test storage... 00:03:50.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.076 03:12:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:50.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.076 --rc genhtml_branch_coverage=1 00:03:50.076 --rc genhtml_function_coverage=1 00:03:50.076 --rc genhtml_legend=1 00:03:50.076 --rc geninfo_all_blocks=1 00:03:50.076 --rc geninfo_unexecuted_blocks=1 00:03:50.076 00:03:50.076 ' 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:50.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.076 --rc genhtml_branch_coverage=1 
00:03:50.076 --rc genhtml_function_coverage=1 00:03:50.076 --rc genhtml_legend=1 00:03:50.076 --rc geninfo_all_blocks=1 00:03:50.076 --rc geninfo_unexecuted_blocks=1 00:03:50.076 00:03:50.076 ' 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:50.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.076 --rc genhtml_branch_coverage=1 00:03:50.076 --rc genhtml_function_coverage=1 00:03:50.076 --rc genhtml_legend=1 00:03:50.076 --rc geninfo_all_blocks=1 00:03:50.076 --rc geninfo_unexecuted_blocks=1 00:03:50.076 00:03:50.076 ' 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:50.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.076 --rc genhtml_branch_coverage=1 00:03:50.076 --rc genhtml_function_coverage=1 00:03:50.076 --rc genhtml_legend=1 00:03:50.076 --rc geninfo_all_blocks=1 00:03:50.076 --rc geninfo_unexecuted_blocks=1 00:03:50.076 00:03:50.076 ' 00:03:50.076 03:12:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:50.076 OK 00:03:50.076 03:12:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:50.076 00:03:50.076 real 0m0.184s 00:03:50.076 user 0m0.107s 00:03:50.076 sys 0m0.089s 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.076 03:12:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:50.076 ************************************ 00:03:50.076 END TEST rpc_client 00:03:50.076 ************************************ 00:03:50.337 03:12:10 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:50.337 03:12:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.337 03:12:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.337 03:12:10 -- common/autotest_common.sh@10 
-- # set +x 00:03:50.337 ************************************ 00:03:50.337 START TEST json_config 00:03:50.337 ************************************ 00:03:50.337 03:12:10 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:50.337 03:12:10 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.337 03:12:10 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:03:50.337 03:12:10 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:50.337 03:12:10 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:50.337 03:12:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.337 03:12:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.337 03:12:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.337 03:12:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.337 03:12:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.337 03:12:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.337 03:12:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.337 03:12:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.337 03:12:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.337 03:12:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.337 03:12:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.337 03:12:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:50.337 03:12:10 json_config -- scripts/common.sh@345 -- # : 1 00:03:50.337 03:12:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.337 03:12:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.337 03:12:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:50.337 03:12:10 json_config -- scripts/common.sh@353 -- # local d=1 00:03:50.337 03:12:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.337 03:12:10 json_config -- scripts/common.sh@355 -- # echo 1 00:03:50.337 03:12:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.337 03:12:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:50.337 03:12:10 json_config -- scripts/common.sh@353 -- # local d=2 00:03:50.337 03:12:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.337 03:12:10 json_config -- scripts/common.sh@355 -- # echo 2 00:03:50.337 03:12:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.337 03:12:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.337 03:12:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.337 03:12:10 json_config -- scripts/common.sh@368 -- # return 0 00:03:50.337 03:12:10 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.337 03:12:10 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.337 --rc genhtml_branch_coverage=1 00:03:50.337 --rc genhtml_function_coverage=1 00:03:50.337 --rc genhtml_legend=1 00:03:50.337 --rc geninfo_all_blocks=1 00:03:50.337 --rc geninfo_unexecuted_blocks=1 00:03:50.337 00:03:50.337 ' 00:03:50.337 03:12:10 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.337 --rc genhtml_branch_coverage=1 00:03:50.337 --rc genhtml_function_coverage=1 00:03:50.337 --rc genhtml_legend=1 00:03:50.337 --rc geninfo_all_blocks=1 00:03:50.337 --rc geninfo_unexecuted_blocks=1 00:03:50.337 00:03:50.337 ' 00:03:50.337 03:12:10 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.337 --rc genhtml_branch_coverage=1 00:03:50.337 --rc genhtml_function_coverage=1 00:03:50.337 --rc genhtml_legend=1 00:03:50.337 --rc geninfo_all_blocks=1 00:03:50.337 --rc geninfo_unexecuted_blocks=1 00:03:50.337 00:03:50.337 ' 00:03:50.337 03:12:10 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.337 --rc genhtml_branch_coverage=1 00:03:50.337 --rc genhtml_function_coverage=1 00:03:50.337 --rc genhtml_legend=1 00:03:50.337 --rc geninfo_all_blocks=1 00:03:50.337 --rc geninfo_unexecuted_blocks=1 00:03:50.337 00:03:50.337 ' 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:50.337 03:12:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:50.337 03:12:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.337 03:12:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.337 03:12:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.337 03:12:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.337 03:12:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.337 03:12:10 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.337 03:12:10 json_config -- paths/export.sh@5 -- # export PATH 00:03:50.337 03:12:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@51 -- # : 0 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:50.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:50.337 03:12:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:50.337 03:12:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:50.338 03:12:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:50.338 03:12:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:50.338 03:12:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:50.338 03:12:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:50.338 03:12:10 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:50.338 03:12:10 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:50.338 INFO: JSON configuration test init 00:03:50.338 03:12:10 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:50.338 03:12:10 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.338 03:12:10 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.338 03:12:10 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:50.338 03:12:10 json_config -- json_config/common.sh@9 -- # local app=target 00:03:50.338 03:12:10 json_config -- json_config/common.sh@10 -- # shift 00:03:50.338 03:12:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:50.338 03:12:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:50.338 03:12:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:50.338 03:12:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.338 03:12:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.338 03:12:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2423013 00:03:50.338 03:12:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:50.338 Waiting for target to run... 
00:03:50.338 03:12:10 json_config -- json_config/common.sh@25 -- # waitforlisten 2423013 /var/tmp/spdk_tgt.sock 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@835 -- # '[' -z 2423013 ']' 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:50.338 03:12:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:50.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.338 03:12:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.597 [2024-12-06 03:12:10.505971] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:03:50.597 [2024-12-06 03:12:10.506024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2423013 ] 00:03:50.856 [2024-12-06 03:12:10.793693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.856 [2024-12-06 03:12:10.828290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.422 03:12:11 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:51.422 03:12:11 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:51.422 03:12:11 json_config -- json_config/common.sh@26 -- # echo '' 00:03:51.422 00:03:51.422 03:12:11 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:51.422 03:12:11 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:51.422 03:12:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.422 03:12:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.422 03:12:11 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:51.422 03:12:11 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:51.423 03:12:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.423 03:12:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.423 03:12:11 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:51.423 03:12:11 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:51.423 03:12:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:54.709 03:12:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.709 03:12:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:54.709 03:12:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@54 -- # sort 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:54.709 03:12:14 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:54.709 03:12:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:54.709 03:12:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:54.709 03:12:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.709 03:12:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:54.709 03:12:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:54.709 03:12:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:54.968 MallocForNvmf0 00:03:54.968 03:12:14 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:54.968 03:12:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:54.968 MallocForNvmf1 00:03:54.968 03:12:15 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:54.968 03:12:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:55.226 [2024-12-06 03:12:15.272702] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.226 03:12:15 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:55.226 03:12:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:55.483 03:12:15 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:55.483 03:12:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:55.741 03:12:15 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:55.741 03:12:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:55.741 03:12:15 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:55.741 03:12:15 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:55.999 [2024-12-06 03:12:15.991000] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.999 03:12:16 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:55.999 03:12:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.999 03:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.999 03:12:16 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:55.999 03:12:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.999 03:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.999 03:12:16 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:55.999 03:12:16 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.999 03:12:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:56.257 MallocBdevForConfigChangeCheck 00:03:56.258 03:12:16 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:56.258 03:12:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.258 03:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.258 03:12:16 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:56.258 03:12:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.515 03:12:16 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...'
00:03:56.516 INFO: shutting down applications...
00:03:56.516 03:12:16 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:03:56.516 03:12:16 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:03:56.516 03:12:16 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:03:56.516 03:12:16 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:03:58.417 Calling clear_iscsi_subsystem
00:03:58.417 Calling clear_nvmf_subsystem
00:03:58.417 Calling clear_nbd_subsystem
00:03:58.417 Calling clear_ublk_subsystem
00:03:58.417 Calling clear_vhost_blk_subsystem
00:03:58.417 Calling clear_vhost_scsi_subsystem
00:03:58.417 Calling clear_bdev_subsystem
00:03:58.417 03:12:18 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:03:58.417 03:12:18 json_config -- json_config/json_config.sh@350 -- # count=100
00:03:58.417 03:12:18 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:03:58.417 03:12:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:58.417 03:12:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:03:58.417 03:12:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:03:58.417 03:12:18 json_config -- json_config/json_config.sh@352 -- # break
00:03:58.417 03:12:18 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:03:58.417 03:12:18 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:03:58.417 03:12:18 json_config -- json_config/common.sh@31 -- # local app=target
00:03:58.417 03:12:18 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:03:58.417 03:12:18 json_config -- json_config/common.sh@35 -- # [[ -n 2423013 ]]
00:03:58.417 03:12:18 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2423013
00:03:58.417 03:12:18 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:03:58.417 03:12:18 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:58.417 03:12:18 json_config -- json_config/common.sh@41 -- # kill -0 2423013
00:03:58.417 03:12:18 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:03:58.984 03:12:19 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:03:58.984 03:12:19 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:58.984 03:12:19 json_config -- json_config/common.sh@41 -- # kill -0 2423013
00:03:58.984 03:12:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:03:58.984 03:12:19 json_config -- json_config/common.sh@43 -- # break
00:03:58.984 03:12:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:03:58.984 03:12:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:03:58.984 SPDK target shutdown done
00:03:58.984 03:12:19 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:03:58.984 INFO: relaunching applications...
00:03:58.984 03:12:19 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:58.984 03:12:19 json_config -- json_config/common.sh@9 -- # local app=target
00:03:58.984 03:12:19 json_config -- json_config/common.sh@10 -- # shift
00:03:58.984 03:12:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:03:58.984 03:12:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:03:58.984 03:12:19 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:03:58.984 03:12:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:58.984 03:12:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:58.984 03:12:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2424661
00:03:58.984 03:12:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:03:58.984 Waiting for target to run...
00:03:58.984 03:12:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:58.984 03:12:19 json_config -- json_config/common.sh@25 -- # waitforlisten 2424661 /var/tmp/spdk_tgt.sock
00:03:58.984 03:12:19 json_config -- common/autotest_common.sh@835 -- # '[' -z 2424661 ']'
00:03:58.984 03:12:19 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:03:58.984 03:12:19 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:58.984 03:12:19 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:03:58.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:03:58.984 03:12:19 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:58.984 03:12:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:58.984 [2024-12-06 03:12:19.106661] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization...
00:03:58.984 [2024-12-06 03:12:19.106721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2424661 ]
00:03:59.551 [2024-12-06 03:12:19.551224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:59.551 [2024-12-06 03:12:19.609986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:02.841 [2024-12-06 03:12:22.646527] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:02.841 [2024-12-06 03:12:22.678865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:03.409 03:12:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:03.409 03:12:23 json_config -- common/autotest_common.sh@868 -- # return 0
00:04:03.409 03:12:23 json_config -- json_config/common.sh@26 -- # echo ''
00:04:03.409
00:04:03.409 03:12:23 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:04:03.409 03:12:23 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:04:03.409 INFO: Checking if target configuration is the same...
00:04:03.409 03:12:23 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:03.409 03:12:23 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:04:03.409 03:12:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:03.409 + '[' 2 -ne 2 ']'
00:04:03.409 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:03.409 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:03.409 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:03.409 +++ basename /dev/fd/62
00:04:03.409 ++ mktemp /tmp/62.XXX
00:04:03.409 + tmp_file_1=/tmp/62.767
00:04:03.409 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:03.409 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:03.409 + tmp_file_2=/tmp/spdk_tgt_config.json.tTE
00:04:03.409 + ret=0
00:04:03.409 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:03.668 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:03.668 + diff -u /tmp/62.767 /tmp/spdk_tgt_config.json.tTE
00:04:03.668 + echo 'INFO: JSON config files are the same'
00:04:03.668 INFO: JSON config files are the same
00:04:03.668 + rm /tmp/62.767 /tmp/spdk_tgt_config.json.tTE
00:04:03.668 + exit 0
00:04:03.668 03:12:23 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:04:03.668 03:12:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:04:03.668 INFO: changing configuration and checking if this can be detected...
00:04:03.668 03:12:23 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:03.669 03:12:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:03.928 03:12:23 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:03.928 03:12:23 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:04:03.928 03:12:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:03.928 + '[' 2 -ne 2 ']'
00:04:03.928 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:03.928 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:03.928 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:03.928 +++ basename /dev/fd/62
00:04:03.928 ++ mktemp /tmp/62.XXX
00:04:03.928 + tmp_file_1=/tmp/62.OAp
00:04:03.928 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:03.928 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:03.928 + tmp_file_2=/tmp/spdk_tgt_config.json.IDE
00:04:03.928 + ret=0
00:04:03.928 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:04.187 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:04.187 + diff -u /tmp/62.OAp /tmp/spdk_tgt_config.json.IDE
00:04:04.187 + ret=1
00:04:04.187 + echo '=== Start of file: /tmp/62.OAp ==='
00:04:04.187 + cat /tmp/62.OAp
00:04:04.187 + echo '=== End of file: /tmp/62.OAp ==='
00:04:04.187 + echo ''
00:04:04.187 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IDE ==='
00:04:04.187 + cat /tmp/spdk_tgt_config.json.IDE
00:04:04.187 + echo '=== End of file: /tmp/spdk_tgt_config.json.IDE ==='
00:04:04.187 + echo ''
00:04:04.187 + rm /tmp/62.OAp /tmp/spdk_tgt_config.json.IDE
00:04:04.187 + exit 1
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:04:04.187 INFO: configuration change detected.
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:04:04.187 03:12:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:04.187 03:12:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@324 -- # [[ -n 2424661 ]]
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:04:04.187 03:12:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:04.187 03:12:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@200 -- # uname -s
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:04:04.187 03:12:24 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:04:04.187 03:12:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:04.187 03:12:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:04.447 03:12:24 json_config -- json_config/json_config.sh@330 -- # killprocess 2424661
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@954 -- # '[' -z 2424661 ']'
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@958 -- # kill -0 2424661
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@959 -- # uname
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2424661
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2424661'
00:04:04.447 killing process with pid 2424661
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@973 -- # kill 2424661
00:04:04.447 03:12:24 json_config -- common/autotest_common.sh@978 -- # wait 2424661
00:04:05.824 03:12:25 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:05.824 03:12:25 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:04:05.824 03:12:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:05.824 03:12:25 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:05.824 03:12:25 json_config -- json_config/json_config.sh@335 -- # return 0
00:04:05.824 03:12:25 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:04:05.824 INFO: Success
00:04:05.824
00:04:05.824 real 0m15.637s user 0m16.138s sys 0m2.502s
00:04:05.824 03:12:25 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:05.824 03:12:25 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:05.824 ************************************
00:04:05.824 END TEST json_config
00:04:05.824 ************************************
00:04:05.824 03:12:25 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:05.824 03:12:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:05.824 03:12:25 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:05.824 03:12:25 -- common/autotest_common.sh@10 -- # set +x
00:04:06.084 ************************************
00:04:06.084 START TEST json_config_extra_key
00:04:06.084 ************************************
00:04:06.084 03:12:25 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:06.084 03:12:26 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:06.084 03:12:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:04:06.084 03:12:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:06.084 03:12:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:04:06.084 03:12:26 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:06.084 03:12:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:06.084 --rc genhtml_branch_coverage=1
00:04:06.084 --rc genhtml_function_coverage=1
00:04:06.084 --rc genhtml_legend=1
00:04:06.084 --rc geninfo_all_blocks=1
00:04:06.084 --rc geninfo_unexecuted_blocks=1
00:04:06.084
00:04:06.084 '
00:04:06.084 03:12:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:06.084 --rc genhtml_branch_coverage=1
00:04:06.084 --rc genhtml_function_coverage=1
00:04:06.084 --rc genhtml_legend=1
00:04:06.084 --rc geninfo_all_blocks=1
00:04:06.084 --rc geninfo_unexecuted_blocks=1
00:04:06.084
00:04:06.084 '
00:04:06.084 03:12:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:06.084 --rc genhtml_branch_coverage=1
00:04:06.084 --rc genhtml_function_coverage=1
00:04:06.084 --rc genhtml_legend=1
00:04:06.084 --rc geninfo_all_blocks=1
00:04:06.084 --rc geninfo_unexecuted_blocks=1
00:04:06.084
00:04:06.084 '
00:04:06.084 03:12:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:06.084 --rc genhtml_branch_coverage=1
00:04:06.084 --rc genhtml_function_coverage=1
00:04:06.084 --rc genhtml_legend=1
00:04:06.084 --rc geninfo_all_blocks=1
00:04:06.084 --rc geninfo_unexecuted_blocks=1
00:04:06.084
00:04:06.084 '
00:04:06.084 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:06.084 03:12:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:06.084 03:12:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:06.084 03:12:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:06.085 03:12:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:06.085 03:12:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:06.085 03:12:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:04:06.085 03:12:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:06.085 03:12:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:04:06.085 INFO: launching applications...
00:04:06.085 03:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2425940
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:06.085 Waiting for target to run...
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2425940 /var/tmp/spdk_tgt.sock
00:04:06.085 03:12:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2425940 ']'
00:04:06.085 03:12:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:06.085 03:12:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:06.085 03:12:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:06.085 03:12:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:06.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:06.085 03:12:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:06.085 03:12:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:06.085 [2024-12-06 03:12:26.196482] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization...
00:04:06.085 [2024-12-06 03:12:26.196529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425940 ]
00:04:06.655 [2024-12-06 03:12:26.643876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:06.655 [2024-12-06 03:12:26.702000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:06.915 03:12:27 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:06.915 03:12:27 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:04:06.915 03:12:27 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:04:06.915
00:04:06.915 03:12:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:04:06.915 INFO: shutting down applications...
00:04:06.915 03:12:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:04:06.915 03:12:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:04:06.915 03:12:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:06.915 03:12:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2425940 ]]
00:04:06.915 03:12:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2425940
00:04:06.915 03:12:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:06.915 03:12:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:06.915 03:12:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2425940
00:04:06.915 03:12:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:07.485 03:12:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:07.485 03:12:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:07.485 03:12:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2425940
00:04:07.486 03:12:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:07.486 03:12:27 json_config_extra_key -- json_config/common.sh@43 -- # break
00:04:07.486 03:12:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:07.486 03:12:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:07.486 SPDK target shutdown done
00:04:07.486 03:12:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:04:07.486 Success
00:04:07.486
00:04:07.486 real 0m1.564s user 0m1.185s sys 0m0.563s
00:04:07.486 03:12:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:07.486 03:12:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:07.486 ************************************
00:04:07.486 END TEST json_config_extra_key
00:04:07.486 ************************************
00:04:07.486 03:12:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:07.486 03:12:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.486 03:12:27 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.486 03:12:27 -- common/autotest_common.sh@10 -- # set +x
00:04:07.486 ************************************
00:04:07.486 START TEST alias_rpc
00:04:07.486 ************************************
00:04:07.486 03:12:27 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:07.745 * Looking for test storage...
00:04:07.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:04:07.745 03:12:27 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:07.745 03:12:27 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:07.745 03:12:27 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:07.745 03:12:27 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@345 -- # : 1
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:07.745 03:12:27 alias_rpc -- scripts/common.sh@368 -- # return 0
00:04:07.745 03:12:27 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:07.745 03:12:27 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.745 --rc genhtml_branch_coverage=1
00:04:07.745 --rc genhtml_function_coverage=1
00:04:07.745 --rc genhtml_legend=1
00:04:07.746 --rc geninfo_all_blocks=1
00:04:07.746 --rc geninfo_unexecuted_blocks=1
00:04:07.746
00:04:07.746 '
00:04:07.746 03:12:27 alias_rpc -- common/autotest_common.sh@1724 --
# export 'LCOV=lcov 00:04:07.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.746 --rc genhtml_branch_coverage=1 00:04:07.746 --rc genhtml_function_coverage=1 00:04:07.746 --rc genhtml_legend=1 00:04:07.746 --rc geninfo_all_blocks=1 00:04:07.746 --rc geninfo_unexecuted_blocks=1 00:04:07.746 00:04:07.746 ' 00:04:07.746 03:12:27 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.746 --rc genhtml_branch_coverage=1 00:04:07.746 --rc genhtml_function_coverage=1 00:04:07.746 --rc genhtml_legend=1 00:04:07.746 --rc geninfo_all_blocks=1 00:04:07.746 --rc geninfo_unexecuted_blocks=1 00:04:07.746 00:04:07.746 ' 00:04:07.746 03:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:07.746 03:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2426231 00:04:07.746 03:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.746 03:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2426231 00:04:07.746 03:12:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2426231 ']' 00:04:07.746 03:12:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.746 03:12:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.746 03:12:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.746 03:12:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.746 03:12:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.746 [2024-12-06 03:12:27.823124] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
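The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace above splits both version strings on `.`, `-` and `:` (via `IFS=.-:` and `read -ra`) and compares them component by component. A condensed standalone sketch of that logic (the real helper in `scripts/common.sh` handles all four operators; this covers only strict less-than):

```shell
# lt VER1 VER2 -> exit 0 iff VER1 is strictly older than VER2.
lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing components default to 0, so "2" compares like "2.0".
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not strictly less-than
}
lt 1.15 2 && echo '1.15 < 2'
```

This is why the trace shows `ver1_l=2`, `ver2_l=1`, and an early `return 0` at the first differing component (`1 < 2`).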
00:04:07.746 [2024-12-06 03:12:27.823173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426231 ] 00:04:08.005 [2024-12-06 03:12:27.886378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.005 [2024-12-06 03:12:27.926601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.005 03:12:28 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.005 03:12:28 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:08.005 03:12:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:08.264 03:12:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2426231 00:04:08.264 03:12:28 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2426231 ']' 00:04:08.264 03:12:28 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2426231 00:04:08.264 03:12:28 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:08.264 03:12:28 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.264 03:12:28 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426231 00:04:08.522 03:12:28 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.523 03:12:28 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.523 03:12:28 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426231' 00:04:08.523 killing process with pid 2426231 00:04:08.523 03:12:28 alias_rpc -- common/autotest_common.sh@973 -- # kill 2426231 00:04:08.523 03:12:28 alias_rpc -- common/autotest_common.sh@978 -- # wait 2426231 00:04:08.781 00:04:08.781 real 0m1.117s 00:04:08.782 user 0m1.153s 00:04:08.782 sys 0m0.395s 00:04:08.782 03:12:28 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.782 03:12:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.782 ************************************ 00:04:08.782 END TEST alias_rpc 00:04:08.782 ************************************ 00:04:08.782 03:12:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:08.782 03:12:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.782 03:12:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.782 03:12:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.782 03:12:28 -- common/autotest_common.sh@10 -- # set +x 00:04:08.782 ************************************ 00:04:08.782 START TEST spdkcli_tcp 00:04:08.782 ************************************ 00:04:08.782 03:12:28 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.782 * Looking for test storage... 
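The `waitforlisten` calls traced above (alias_rpc section, `rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`) wait for the freshly started target to come up before issuing RPCs. A hedged standalone approximation (the real helper in `common/autotest_common.sh` is more involved; here we only poll for the UNIX socket and bail out if the target dies first):

```shell
# waitforlisten PID [RPC_ADDR] -> exit 0 once RPC_ADDR exists as a socket,
# exit 1 if PID dies first or max_retries polls elapse.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [ -S "$rpc_addr" ] && return 0           # socket present: target is up
        sleep 0.1
    done
    return 1
}
```

The dead-target check is what lets the log's `trap 'killprocess $spdk_tgt_pid; exit 1' ERR` fire promptly instead of burning through all 100 retries.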
00:04:08.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:08.782 03:12:28 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.782 03:12:28 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.782 03:12:28 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:09.041 03:12:28 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.041 03:12:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.042 03:12:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.042 03:12:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.042 --rc genhtml_branch_coverage=1 00:04:09.042 --rc genhtml_function_coverage=1 00:04:09.042 --rc genhtml_legend=1 00:04:09.042 --rc geninfo_all_blocks=1 00:04:09.042 --rc geninfo_unexecuted_blocks=1 00:04:09.042 00:04:09.042 ' 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.042 --rc genhtml_branch_coverage=1 00:04:09.042 --rc genhtml_function_coverage=1 00:04:09.042 --rc genhtml_legend=1 00:04:09.042 --rc geninfo_all_blocks=1 00:04:09.042 --rc geninfo_unexecuted_blocks=1 00:04:09.042 00:04:09.042 ' 00:04:09.042 03:12:28 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.042 --rc genhtml_branch_coverage=1 00:04:09.042 --rc genhtml_function_coverage=1 00:04:09.042 --rc genhtml_legend=1 00:04:09.042 --rc geninfo_all_blocks=1 00:04:09.042 --rc geninfo_unexecuted_blocks=1 00:04:09.042 00:04:09.042 ' 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.042 --rc genhtml_branch_coverage=1 00:04:09.042 --rc genhtml_function_coverage=1 00:04:09.042 --rc genhtml_legend=1 00:04:09.042 --rc geninfo_all_blocks=1 00:04:09.042 --rc geninfo_unexecuted_blocks=1 00:04:09.042 00:04:09.042 ' 00:04:09.042 03:12:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:09.042 03:12:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:09.042 03:12:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:09.042 03:12:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:09.042 03:12:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:09.042 03:12:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:09.042 03:12:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.042 03:12:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2426526 00:04:09.042 03:12:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:09.042 03:12:28 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2426526 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2426526 ']' 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.042 03:12:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.042 [2024-12-06 03:12:29.008289] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:09.042 [2024-12-06 03:12:29.008333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426526 ] 00:04:09.042 [2024-12-06 03:12:29.070936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.042 [2024-12-06 03:12:29.112284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.042 [2024-12-06 03:12:29.112287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.301 03:12:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.301 03:12:29 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:09.301 03:12:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2426530 00:04:09.301 03:12:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:09.301 03:12:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:04:09.561 [ 00:04:09.561 "bdev_malloc_delete", 00:04:09.561 "bdev_malloc_create", 00:04:09.561 "bdev_null_resize", 00:04:09.561 "bdev_null_delete", 00:04:09.561 "bdev_null_create", 00:04:09.561 "bdev_nvme_cuse_unregister", 00:04:09.561 "bdev_nvme_cuse_register", 00:04:09.561 "bdev_opal_new_user", 00:04:09.561 "bdev_opal_set_lock_state", 00:04:09.561 "bdev_opal_delete", 00:04:09.561 "bdev_opal_get_info", 00:04:09.561 "bdev_opal_create", 00:04:09.561 "bdev_nvme_opal_revert", 00:04:09.561 "bdev_nvme_opal_init", 00:04:09.561 "bdev_nvme_send_cmd", 00:04:09.561 "bdev_nvme_set_keys", 00:04:09.561 "bdev_nvme_get_path_iostat", 00:04:09.561 "bdev_nvme_get_mdns_discovery_info", 00:04:09.561 "bdev_nvme_stop_mdns_discovery", 00:04:09.561 "bdev_nvme_start_mdns_discovery", 00:04:09.561 "bdev_nvme_set_multipath_policy", 00:04:09.561 "bdev_nvme_set_preferred_path", 00:04:09.561 "bdev_nvme_get_io_paths", 00:04:09.561 "bdev_nvme_remove_error_injection", 00:04:09.561 "bdev_nvme_add_error_injection", 00:04:09.561 "bdev_nvme_get_discovery_info", 00:04:09.561 "bdev_nvme_stop_discovery", 00:04:09.561 "bdev_nvme_start_discovery", 00:04:09.561 "bdev_nvme_get_controller_health_info", 00:04:09.561 "bdev_nvme_disable_controller", 00:04:09.561 "bdev_nvme_enable_controller", 00:04:09.561 "bdev_nvme_reset_controller", 00:04:09.561 "bdev_nvme_get_transport_statistics", 00:04:09.561 "bdev_nvme_apply_firmware", 00:04:09.561 "bdev_nvme_detach_controller", 00:04:09.561 "bdev_nvme_get_controllers", 00:04:09.561 "bdev_nvme_attach_controller", 00:04:09.561 "bdev_nvme_set_hotplug", 00:04:09.561 "bdev_nvme_set_options", 00:04:09.561 "bdev_passthru_delete", 00:04:09.561 "bdev_passthru_create", 00:04:09.561 "bdev_lvol_set_parent_bdev", 00:04:09.561 "bdev_lvol_set_parent", 00:04:09.561 "bdev_lvol_check_shallow_copy", 00:04:09.561 "bdev_lvol_start_shallow_copy", 00:04:09.561 "bdev_lvol_grow_lvstore", 00:04:09.561 "bdev_lvol_get_lvols", 00:04:09.561 "bdev_lvol_get_lvstores", 
00:04:09.561 "bdev_lvol_delete", 00:04:09.561 "bdev_lvol_set_read_only", 00:04:09.561 "bdev_lvol_resize", 00:04:09.561 "bdev_lvol_decouple_parent", 00:04:09.561 "bdev_lvol_inflate", 00:04:09.561 "bdev_lvol_rename", 00:04:09.561 "bdev_lvol_clone_bdev", 00:04:09.561 "bdev_lvol_clone", 00:04:09.561 "bdev_lvol_snapshot", 00:04:09.561 "bdev_lvol_create", 00:04:09.561 "bdev_lvol_delete_lvstore", 00:04:09.561 "bdev_lvol_rename_lvstore", 00:04:09.561 "bdev_lvol_create_lvstore", 00:04:09.561 "bdev_raid_set_options", 00:04:09.561 "bdev_raid_remove_base_bdev", 00:04:09.561 "bdev_raid_add_base_bdev", 00:04:09.561 "bdev_raid_delete", 00:04:09.561 "bdev_raid_create", 00:04:09.561 "bdev_raid_get_bdevs", 00:04:09.561 "bdev_error_inject_error", 00:04:09.561 "bdev_error_delete", 00:04:09.561 "bdev_error_create", 00:04:09.561 "bdev_split_delete", 00:04:09.561 "bdev_split_create", 00:04:09.561 "bdev_delay_delete", 00:04:09.561 "bdev_delay_create", 00:04:09.561 "bdev_delay_update_latency", 00:04:09.561 "bdev_zone_block_delete", 00:04:09.561 "bdev_zone_block_create", 00:04:09.561 "blobfs_create", 00:04:09.561 "blobfs_detect", 00:04:09.561 "blobfs_set_cache_size", 00:04:09.561 "bdev_aio_delete", 00:04:09.561 "bdev_aio_rescan", 00:04:09.561 "bdev_aio_create", 00:04:09.561 "bdev_ftl_set_property", 00:04:09.561 "bdev_ftl_get_properties", 00:04:09.561 "bdev_ftl_get_stats", 00:04:09.561 "bdev_ftl_unmap", 00:04:09.561 "bdev_ftl_unload", 00:04:09.561 "bdev_ftl_delete", 00:04:09.561 "bdev_ftl_load", 00:04:09.561 "bdev_ftl_create", 00:04:09.561 "bdev_virtio_attach_controller", 00:04:09.561 "bdev_virtio_scsi_get_devices", 00:04:09.561 "bdev_virtio_detach_controller", 00:04:09.561 "bdev_virtio_blk_set_hotplug", 00:04:09.561 "bdev_iscsi_delete", 00:04:09.561 "bdev_iscsi_create", 00:04:09.561 "bdev_iscsi_set_options", 00:04:09.561 "accel_error_inject_error", 00:04:09.561 "ioat_scan_accel_module", 00:04:09.561 "dsa_scan_accel_module", 00:04:09.561 "iaa_scan_accel_module", 00:04:09.561 
"vfu_virtio_create_fs_endpoint", 00:04:09.561 "vfu_virtio_create_scsi_endpoint", 00:04:09.561 "vfu_virtio_scsi_remove_target", 00:04:09.561 "vfu_virtio_scsi_add_target", 00:04:09.561 "vfu_virtio_create_blk_endpoint", 00:04:09.561 "vfu_virtio_delete_endpoint", 00:04:09.561 "keyring_file_remove_key", 00:04:09.561 "keyring_file_add_key", 00:04:09.561 "keyring_linux_set_options", 00:04:09.561 "fsdev_aio_delete", 00:04:09.561 "fsdev_aio_create", 00:04:09.561 "iscsi_get_histogram", 00:04:09.561 "iscsi_enable_histogram", 00:04:09.561 "iscsi_set_options", 00:04:09.561 "iscsi_get_auth_groups", 00:04:09.561 "iscsi_auth_group_remove_secret", 00:04:09.561 "iscsi_auth_group_add_secret", 00:04:09.561 "iscsi_delete_auth_group", 00:04:09.561 "iscsi_create_auth_group", 00:04:09.561 "iscsi_set_discovery_auth", 00:04:09.561 "iscsi_get_options", 00:04:09.561 "iscsi_target_node_request_logout", 00:04:09.561 "iscsi_target_node_set_redirect", 00:04:09.561 "iscsi_target_node_set_auth", 00:04:09.561 "iscsi_target_node_add_lun", 00:04:09.561 "iscsi_get_stats", 00:04:09.561 "iscsi_get_connections", 00:04:09.561 "iscsi_portal_group_set_auth", 00:04:09.561 "iscsi_start_portal_group", 00:04:09.561 "iscsi_delete_portal_group", 00:04:09.561 "iscsi_create_portal_group", 00:04:09.561 "iscsi_get_portal_groups", 00:04:09.561 "iscsi_delete_target_node", 00:04:09.561 "iscsi_target_node_remove_pg_ig_maps", 00:04:09.561 "iscsi_target_node_add_pg_ig_maps", 00:04:09.561 "iscsi_create_target_node", 00:04:09.561 "iscsi_get_target_nodes", 00:04:09.561 "iscsi_delete_initiator_group", 00:04:09.561 "iscsi_initiator_group_remove_initiators", 00:04:09.561 "iscsi_initiator_group_add_initiators", 00:04:09.561 "iscsi_create_initiator_group", 00:04:09.561 "iscsi_get_initiator_groups", 00:04:09.561 "nvmf_set_crdt", 00:04:09.561 "nvmf_set_config", 00:04:09.561 "nvmf_set_max_subsystems", 00:04:09.561 "nvmf_stop_mdns_prr", 00:04:09.561 "nvmf_publish_mdns_prr", 00:04:09.561 "nvmf_subsystem_get_listeners", 00:04:09.561 
"nvmf_subsystem_get_qpairs", 00:04:09.561 "nvmf_subsystem_get_controllers", 00:04:09.561 "nvmf_get_stats", 00:04:09.561 "nvmf_get_transports", 00:04:09.561 "nvmf_create_transport", 00:04:09.561 "nvmf_get_targets", 00:04:09.561 "nvmf_delete_target", 00:04:09.561 "nvmf_create_target", 00:04:09.561 "nvmf_subsystem_allow_any_host", 00:04:09.561 "nvmf_subsystem_set_keys", 00:04:09.561 "nvmf_subsystem_remove_host", 00:04:09.561 "nvmf_subsystem_add_host", 00:04:09.561 "nvmf_ns_remove_host", 00:04:09.561 "nvmf_ns_add_host", 00:04:09.561 "nvmf_subsystem_remove_ns", 00:04:09.561 "nvmf_subsystem_set_ns_ana_group", 00:04:09.561 "nvmf_subsystem_add_ns", 00:04:09.561 "nvmf_subsystem_listener_set_ana_state", 00:04:09.561 "nvmf_discovery_get_referrals", 00:04:09.561 "nvmf_discovery_remove_referral", 00:04:09.561 "nvmf_discovery_add_referral", 00:04:09.561 "nvmf_subsystem_remove_listener", 00:04:09.561 "nvmf_subsystem_add_listener", 00:04:09.561 "nvmf_delete_subsystem", 00:04:09.561 "nvmf_create_subsystem", 00:04:09.561 "nvmf_get_subsystems", 00:04:09.561 "env_dpdk_get_mem_stats", 00:04:09.561 "nbd_get_disks", 00:04:09.561 "nbd_stop_disk", 00:04:09.561 "nbd_start_disk", 00:04:09.561 "ublk_recover_disk", 00:04:09.561 "ublk_get_disks", 00:04:09.561 "ublk_stop_disk", 00:04:09.561 "ublk_start_disk", 00:04:09.561 "ublk_destroy_target", 00:04:09.561 "ublk_create_target", 00:04:09.561 "virtio_blk_create_transport", 00:04:09.561 "virtio_blk_get_transports", 00:04:09.561 "vhost_controller_set_coalescing", 00:04:09.561 "vhost_get_controllers", 00:04:09.561 "vhost_delete_controller", 00:04:09.561 "vhost_create_blk_controller", 00:04:09.561 "vhost_scsi_controller_remove_target", 00:04:09.561 "vhost_scsi_controller_add_target", 00:04:09.562 "vhost_start_scsi_controller", 00:04:09.562 "vhost_create_scsi_controller", 00:04:09.562 "thread_set_cpumask", 00:04:09.562 "scheduler_set_options", 00:04:09.562 "framework_get_governor", 00:04:09.562 "framework_get_scheduler", 00:04:09.562 
"framework_set_scheduler", 00:04:09.562 "framework_get_reactors", 00:04:09.562 "thread_get_io_channels", 00:04:09.562 "thread_get_pollers", 00:04:09.562 "thread_get_stats", 00:04:09.562 "framework_monitor_context_switch", 00:04:09.562 "spdk_kill_instance", 00:04:09.562 "log_enable_timestamps", 00:04:09.562 "log_get_flags", 00:04:09.562 "log_clear_flag", 00:04:09.562 "log_set_flag", 00:04:09.562 "log_get_level", 00:04:09.562 "log_set_level", 00:04:09.562 "log_get_print_level", 00:04:09.562 "log_set_print_level", 00:04:09.562 "framework_enable_cpumask_locks", 00:04:09.562 "framework_disable_cpumask_locks", 00:04:09.562 "framework_wait_init", 00:04:09.562 "framework_start_init", 00:04:09.562 "scsi_get_devices", 00:04:09.562 "bdev_get_histogram", 00:04:09.562 "bdev_enable_histogram", 00:04:09.562 "bdev_set_qos_limit", 00:04:09.562 "bdev_set_qd_sampling_period", 00:04:09.562 "bdev_get_bdevs", 00:04:09.562 "bdev_reset_iostat", 00:04:09.562 "bdev_get_iostat", 00:04:09.562 "bdev_examine", 00:04:09.562 "bdev_wait_for_examine", 00:04:09.562 "bdev_set_options", 00:04:09.562 "accel_get_stats", 00:04:09.562 "accel_set_options", 00:04:09.562 "accel_set_driver", 00:04:09.562 "accel_crypto_key_destroy", 00:04:09.562 "accel_crypto_keys_get", 00:04:09.562 "accel_crypto_key_create", 00:04:09.562 "accel_assign_opc", 00:04:09.562 "accel_get_module_info", 00:04:09.562 "accel_get_opc_assignments", 00:04:09.562 "vmd_rescan", 00:04:09.562 "vmd_remove_device", 00:04:09.562 "vmd_enable", 00:04:09.562 "sock_get_default_impl", 00:04:09.562 "sock_set_default_impl", 00:04:09.562 "sock_impl_set_options", 00:04:09.562 "sock_impl_get_options", 00:04:09.562 "iobuf_get_stats", 00:04:09.562 "iobuf_set_options", 00:04:09.562 "keyring_get_keys", 00:04:09.562 "vfu_tgt_set_base_path", 00:04:09.562 "framework_get_pci_devices", 00:04:09.562 "framework_get_config", 00:04:09.562 "framework_get_subsystems", 00:04:09.562 "fsdev_set_opts", 00:04:09.562 "fsdev_get_opts", 00:04:09.562 "trace_get_info", 
00:04:09.562 "trace_get_tpoint_group_mask", 00:04:09.562 "trace_disable_tpoint_group", 00:04:09.562 "trace_enable_tpoint_group", 00:04:09.562 "trace_clear_tpoint_mask", 00:04:09.562 "trace_set_tpoint_mask", 00:04:09.562 "notify_get_notifications", 00:04:09.562 "notify_get_types", 00:04:09.562 "spdk_get_version", 00:04:09.562 "rpc_get_methods" 00:04:09.562 ] 00:04:09.562 03:12:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.562 03:12:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:09.562 03:12:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2426526 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2426526 ']' 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2426526 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426526 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426526' 00:04:09.562 killing process with pid 2426526 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2426526 00:04:09.562 03:12:29 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2426526 00:04:09.848 00:04:09.848 real 0m1.120s 00:04:09.848 user 0m1.885s 00:04:09.848 sys 0m0.433s 00:04:09.848 03:12:29 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.848 03:12:29 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:09.848 ************************************ 00:04:09.848 END TEST spdkcli_tcp 00:04:09.848 ************************************ 00:04:09.848 03:12:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:09.848 03:12:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.848 03:12:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.848 03:12:29 -- common/autotest_common.sh@10 -- # set +x 00:04:09.848 ************************************ 00:04:09.848 START TEST dpdk_mem_utility 00:04:09.848 ************************************ 00:04:09.848 03:12:29 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.107 * Looking for test storage... 00:04:10.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.107 03:12:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:04:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.107 --rc genhtml_branch_coverage=1 00:04:10.107 --rc genhtml_function_coverage=1 00:04:10.107 --rc genhtml_legend=1 00:04:10.107 --rc geninfo_all_blocks=1 00:04:10.107 --rc geninfo_unexecuted_blocks=1 00:04:10.107 00:04:10.107 ' 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.107 --rc genhtml_branch_coverage=1 00:04:10.107 --rc genhtml_function_coverage=1 00:04:10.107 --rc genhtml_legend=1 00:04:10.107 --rc geninfo_all_blocks=1 00:04:10.107 --rc geninfo_unexecuted_blocks=1 00:04:10.107 00:04:10.107 ' 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.107 --rc genhtml_branch_coverage=1 00:04:10.107 --rc genhtml_function_coverage=1 00:04:10.107 --rc genhtml_legend=1 00:04:10.107 --rc geninfo_all_blocks=1 00:04:10.107 --rc geninfo_unexecuted_blocks=1 00:04:10.107 00:04:10.107 ' 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.107 --rc genhtml_branch_coverage=1 00:04:10.107 --rc genhtml_function_coverage=1 00:04:10.107 --rc genhtml_legend=1 00:04:10.107 --rc geninfo_all_blocks=1 00:04:10.107 --rc geninfo_unexecuted_blocks=1 00:04:10.107 00:04:10.107 ' 00:04:10.107 03:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.107 03:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2426828 00:04:10.107 03:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.107 03:12:30 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2426828 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2426828 ']' 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.107 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.107 [2024-12-06 03:12:30.184458] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:10.107 [2024-12-06 03:12:30.184508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426828 ] 00:04:10.367 [2024-12-06 03:12:30.246816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.367 [2024-12-06 03:12:30.290094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.367 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.367 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:10.367 03:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:10.367 03:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:10.367 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.367 
03:12:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.367 { 00:04:10.367 "filename": "/tmp/spdk_mem_dump.txt" 00:04:10.367 } 00:04:10.367 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.367 03:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.627 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:10.627 1 heaps totaling size 818.000000 MiB 00:04:10.627 size: 818.000000 MiB heap id: 0 00:04:10.627 end heaps---------- 00:04:10.627 9 mempools totaling size 603.782043 MiB 00:04:10.627 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:10.627 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:10.627 size: 100.555481 MiB name: bdev_io_2426828 00:04:10.627 size: 50.003479 MiB name: msgpool_2426828 00:04:10.627 size: 36.509338 MiB name: fsdev_io_2426828 00:04:10.627 size: 21.763794 MiB name: PDU_Pool 00:04:10.627 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:10.627 size: 4.133484 MiB name: evtpool_2426828 00:04:10.627 size: 0.026123 MiB name: Session_Pool 00:04:10.627 end mempools------- 00:04:10.627 6 memzones totaling size 4.142822 MiB 00:04:10.627 size: 1.000366 MiB name: RG_ring_0_2426828 00:04:10.627 size: 1.000366 MiB name: RG_ring_1_2426828 00:04:10.627 size: 1.000366 MiB name: RG_ring_4_2426828 00:04:10.627 size: 1.000366 MiB name: RG_ring_5_2426828 00:04:10.627 size: 0.125366 MiB name: RG_ring_2_2426828 00:04:10.627 size: 0.015991 MiB name: RG_ring_3_2426828 00:04:10.627 end memzones------- 00:04:10.627 03:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:10.627 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:10.627 list of free elements. 
size: 10.852478 MiB 00:04:10.627 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:10.627 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:10.627 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:10.627 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:10.627 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:10.627 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:10.627 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:10.627 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:10.627 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:10.627 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:10.627 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:10.627 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:10.627 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:10.627 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:10.627 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:10.627 list of standard malloc elements. 
size: 199.218628 MiB 00:04:10.627 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:10.627 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:10.627 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:10.627 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:10.627 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:10.627 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:10.627 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:10.627 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:10.627 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:10.627 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:10.627 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:10.628 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:10.628 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:10.628 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:10.628 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:10.628 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:10.628 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:10.628 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:10.628 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:10.628 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:10.628 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:10.628 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:10.628 list of memzone associated elements. 
size: 607.928894 MiB 00:04:10.628 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:10.628 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:10.628 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:10.628 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:10.628 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:10.628 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2426828_0 00:04:10.628 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:10.628 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2426828_0 00:04:10.628 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:10.628 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2426828_0 00:04:10.628 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:10.628 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:10.628 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:10.628 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:10.628 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:10.628 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2426828_0 00:04:10.628 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:10.628 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2426828 00:04:10.628 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:10.628 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2426828 00:04:10.628 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:10.628 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:10.628 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:10.628 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:10.628 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:10.628 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:10.628 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:10.628 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:10.628 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:10.628 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2426828 00:04:10.628 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:10.628 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2426828 00:04:10.628 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:10.628 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2426828 00:04:10.628 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:10.628 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2426828 00:04:10.628 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:10.628 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2426828 00:04:10.628 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:10.628 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2426828 00:04:10.628 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:10.628 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:10.628 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:10.628 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:10.628 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:10.628 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:10.628 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:10.628 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2426828 00:04:10.628 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:10.628 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2426828 00:04:10.628 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:10.628 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:10.628 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:10.628 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:10.628 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:10.628 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2426828 00:04:10.628 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:10.628 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:10.628 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:10.628 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2426828 00:04:10.628 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:10.628 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2426828 00:04:10.628 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:10.628 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2426828 00:04:10.628 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:10.628 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:10.628 03:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:10.628 03:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2426828 00:04:10.628 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2426828 ']' 00:04:10.628 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2426828 00:04:10.628 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:10.628 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.628 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426828 00:04:10.628 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.628 03:12:30 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.628 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426828' 00:04:10.628 killing process with pid 2426828 00:04:10.628 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2426828 00:04:10.628 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2426828 00:04:10.887 00:04:10.887 real 0m0.976s 00:04:10.887 user 0m0.897s 00:04:10.887 sys 0m0.400s 00:04:10.887 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.887 03:12:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.887 ************************************ 00:04:10.887 END TEST dpdk_mem_utility 00:04:10.887 ************************************ 00:04:10.887 03:12:30 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:10.887 03:12:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.887 03:12:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.887 03:12:30 -- common/autotest_common.sh@10 -- # set +x 00:04:10.887 ************************************ 00:04:10.887 START TEST event 00:04:10.887 ************************************ 00:04:10.887 03:12:31 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:11.147 * Looking for test storage... 
00:04:11.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:11.147 03:12:31 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.147 03:12:31 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.147 03:12:31 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.147 03:12:31 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.147 03:12:31 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.147 03:12:31 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.147 03:12:31 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.147 03:12:31 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.147 03:12:31 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.147 03:12:31 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.147 03:12:31 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.147 03:12:31 event -- scripts/common.sh@344 -- # case "$op" in 00:04:11.147 03:12:31 event -- scripts/common.sh@345 -- # : 1 00:04:11.147 03:12:31 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.147 03:12:31 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.147 03:12:31 event -- scripts/common.sh@365 -- # decimal 1 00:04:11.147 03:12:31 event -- scripts/common.sh@353 -- # local d=1 00:04:11.147 03:12:31 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.147 03:12:31 event -- scripts/common.sh@355 -- # echo 1 00:04:11.147 03:12:31 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.147 03:12:31 event -- scripts/common.sh@366 -- # decimal 2 00:04:11.147 03:12:31 event -- scripts/common.sh@353 -- # local d=2 00:04:11.147 03:12:31 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.147 03:12:31 event -- scripts/common.sh@355 -- # echo 2 00:04:11.147 03:12:31 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.147 03:12:31 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.147 03:12:31 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.147 03:12:31 event -- scripts/common.sh@368 -- # return 0 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.147 --rc genhtml_branch_coverage=1 00:04:11.147 --rc genhtml_function_coverage=1 00:04:11.147 --rc genhtml_legend=1 00:04:11.147 --rc geninfo_all_blocks=1 00:04:11.147 --rc geninfo_unexecuted_blocks=1 00:04:11.147 00:04:11.147 ' 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.147 --rc genhtml_branch_coverage=1 00:04:11.147 --rc genhtml_function_coverage=1 00:04:11.147 --rc genhtml_legend=1 00:04:11.147 --rc geninfo_all_blocks=1 00:04:11.147 --rc geninfo_unexecuted_blocks=1 00:04:11.147 00:04:11.147 ' 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:11.147 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:11.147 --rc genhtml_branch_coverage=1 00:04:11.147 --rc genhtml_function_coverage=1 00:04:11.147 --rc genhtml_legend=1 00:04:11.147 --rc geninfo_all_blocks=1 00:04:11.147 --rc geninfo_unexecuted_blocks=1 00:04:11.147 00:04:11.147 ' 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.147 --rc genhtml_branch_coverage=1 00:04:11.147 --rc genhtml_function_coverage=1 00:04:11.147 --rc genhtml_legend=1 00:04:11.147 --rc geninfo_all_blocks=1 00:04:11.147 --rc geninfo_unexecuted_blocks=1 00:04:11.147 00:04:11.147 ' 00:04:11.147 03:12:31 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:11.147 03:12:31 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:11.147 03:12:31 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:11.147 03:12:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.147 03:12:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.147 ************************************ 00:04:11.147 START TEST event_perf 00:04:11.147 ************************************ 00:04:11.147 03:12:31 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:11.147 Running I/O for 1 seconds...[2024-12-06 03:12:31.229890] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:04:11.147 [2024-12-06 03:12:31.230077] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427082 ] 00:04:11.407 [2024-12-06 03:12:31.295041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.407 [2024-12-06 03:12:31.339289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.407 [2024-12-06 03:12:31.339388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.407 [2024-12-06 03:12:31.339462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.407 [2024-12-06 03:12:31.339464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.345 Running I/O for 1 seconds... 00:04:12.345 lcore 0: 204797 00:04:12.345 lcore 1: 204795 00:04:12.345 lcore 2: 204796 00:04:12.345 lcore 3: 204797 00:04:12.345 done. 
00:04:12.345 00:04:12.345 real 0m1.170s 00:04:12.345 user 0m4.101s 00:04:12.345 sys 0m0.068s 00:04:12.345 03:12:32 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.345 03:12:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:12.345 ************************************ 00:04:12.345 END TEST event_perf 00:04:12.345 ************************************ 00:04:12.345 03:12:32 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:12.345 03:12:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:12.345 03:12:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.345 03:12:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.345 ************************************ 00:04:12.345 START TEST event_reactor 00:04:12.345 ************************************ 00:04:12.345 03:12:32 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:12.345 [2024-12-06 03:12:32.461882] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:04:12.345 [2024-12-06 03:12:32.461959] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427250 ] 00:04:12.604 [2024-12-06 03:12:32.526344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.604 [2024-12-06 03:12:32.567716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.541 test_start 00:04:13.541 oneshot 00:04:13.541 tick 100 00:04:13.541 tick 100 00:04:13.541 tick 250 00:04:13.541 tick 100 00:04:13.541 tick 100 00:04:13.541 tick 100 00:04:13.541 tick 250 00:04:13.541 tick 500 00:04:13.541 tick 100 00:04:13.541 tick 100 00:04:13.541 tick 250 00:04:13.541 tick 100 00:04:13.541 tick 100 00:04:13.541 test_end 00:04:13.541 00:04:13.541 real 0m1.163s 00:04:13.541 user 0m1.102s 00:04:13.541 sys 0m0.058s 00:04:13.541 03:12:33 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.541 03:12:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:13.541 ************************************ 00:04:13.541 END TEST event_reactor 00:04:13.541 ************************************ 00:04:13.541 03:12:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:13.541 03:12:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:13.541 03:12:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.541 03:12:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.541 ************************************ 00:04:13.541 START TEST event_reactor_perf 00:04:13.541 ************************************ 00:04:13.541 03:12:33 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:13.800 [2024-12-06 03:12:33.697456] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:13.800 [2024-12-06 03:12:33.697526] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427417 ] 00:04:13.800 [2024-12-06 03:12:33.766037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.800 [2024-12-06 03:12:33.805622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.753 test_start 00:04:14.753 test_end 00:04:14.753 Performance: 508093 events per second 00:04:14.753 00:04:14.753 real 0m1.167s 00:04:14.753 user 0m1.093s 00:04:14.753 sys 0m0.070s 00:04:14.753 03:12:34 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.753 03:12:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:14.753 ************************************ 00:04:14.753 END TEST event_reactor_perf 00:04:14.753 ************************************ 00:04:14.753 03:12:34 event -- event/event.sh@49 -- # uname -s 00:04:14.753 03:12:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:14.753 03:12:34 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:14.753 03:12:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.753 03:12:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.753 03:12:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.011 ************************************ 00:04:15.011 START TEST event_scheduler 00:04:15.011 ************************************ 00:04:15.011 03:12:34 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:15.011 * Looking for test storage... 00:04:15.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.011 03:12:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.011 --rc genhtml_branch_coverage=1 00:04:15.011 --rc genhtml_function_coverage=1 00:04:15.011 --rc genhtml_legend=1 00:04:15.011 --rc geninfo_all_blocks=1 00:04:15.011 --rc geninfo_unexecuted_blocks=1 00:04:15.011 00:04:15.011 ' 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.011 --rc genhtml_branch_coverage=1 00:04:15.011 --rc genhtml_function_coverage=1 00:04:15.011 --rc 
genhtml_legend=1 00:04:15.011 --rc geninfo_all_blocks=1 00:04:15.011 --rc geninfo_unexecuted_blocks=1 00:04:15.011 00:04:15.011 ' 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.011 --rc genhtml_branch_coverage=1 00:04:15.011 --rc genhtml_function_coverage=1 00:04:15.011 --rc genhtml_legend=1 00:04:15.011 --rc geninfo_all_blocks=1 00:04:15.011 --rc geninfo_unexecuted_blocks=1 00:04:15.011 00:04:15.011 ' 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.011 --rc genhtml_branch_coverage=1 00:04:15.011 --rc genhtml_function_coverage=1 00:04:15.011 --rc genhtml_legend=1 00:04:15.011 --rc geninfo_all_blocks=1 00:04:15.011 --rc geninfo_unexecuted_blocks=1 00:04:15.011 00:04:15.011 ' 00:04:15.011 03:12:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:15.011 03:12:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2427731 00:04:15.011 03:12:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.011 03:12:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:15.011 03:12:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2427731 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2427731 ']' 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.011 03:12:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.011 [2024-12-06 03:12:35.136235] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:15.011 [2024-12-06 03:12:35.136285] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427731 ] 00:04:15.268 [2024-12-06 03:12:35.197656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:15.268 [2024-12-06 03:12:35.241444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.268 [2024-12-06 03:12:35.241532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.268 [2024-12-06 03:12:35.241618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.268 [2024-12-06 03:12:35.241620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:15.268 03:12:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.268 [2024-12-06 03:12:35.310226] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:15.268 [2024-12-06 03:12:35.310244] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:15.268 [2024-12-06 03:12:35.310254] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:15.268 [2024-12-06 03:12:35.310260] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:15.268 [2024-12-06 03:12:35.310265] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.268 03:12:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.268 [2024-12-06 03:12:35.385693] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.268 03:12:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.268 03:12:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 ************************************ 00:04:15.527 START TEST scheduler_create_thread 00:04:15.527 ************************************ 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 2 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 3 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 4 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 5 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 6 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 7 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 8 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 9 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 10 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.527 03:12:35 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.527 03:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.902 03:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.902 03:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:16.902 03:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:16.902 03:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.902 03:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.277 03:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.277 00:04:18.277 real 0m2.621s 00:04:18.277 user 0m0.023s 00:04:18.277 sys 0m0.006s 00:04:18.277 03:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.277 03:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.277 ************************************ 00:04:18.277 END TEST scheduler_create_thread 00:04:18.277 ************************************ 00:04:18.277 03:12:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:18.277 03:12:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2427731 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2427731 ']' 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2427731 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2427731 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2427731' 00:04:18.277 killing process with pid 2427731 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2427731 00:04:18.277 03:12:38 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2427731 00:04:18.536 [2024-12-06 03:12:38.523852] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:18.794 00:04:18.794 real 0m3.777s 00:04:18.794 user 0m5.713s 00:04:18.794 sys 0m0.370s 00:04:18.794 03:12:38 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.794 03:12:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.794 ************************************ 00:04:18.794 END TEST event_scheduler 00:04:18.794 ************************************ 00:04:18.794 03:12:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:18.794 03:12:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:18.794 03:12:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.794 03:12:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.794 03:12:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.794 ************************************ 00:04:18.794 START TEST app_repeat 00:04:18.794 ************************************ 00:04:18.794 03:12:38 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:18.794 03:12:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.794 03:12:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.794 03:12:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:18.794 03:12:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.794 03:12:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:18.794 03:12:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:18.795 03:12:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:18.795 03:12:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2428426 00:04:18.795 03:12:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:18.795 03:12:38 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.795 03:12:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2428426' 00:04:18.795 Process app_repeat pid: 2428426 00:04:18.795 03:12:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:18.795 03:12:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:18.795 spdk_app_start Round 0 00:04:18.795 03:12:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2428426 /var/tmp/spdk-nbd.sock 00:04:18.795 03:12:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2428426 ']' 00:04:18.795 03:12:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.795 03:12:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.795 03:12:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:18.795 03:12:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.795 03:12:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.795 [2024-12-06 03:12:38.806486] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:04:18.795 [2024-12-06 03:12:38.806537] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428426 ] 00:04:18.795 [2024-12-06 03:12:38.871069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.795 [2024-12-06 03:12:38.912987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.795 [2024-12-06 03:12:38.912990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.053 03:12:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.053 03:12:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:19.053 03:12:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.053 Malloc0 00:04:19.312 03:12:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.312 Malloc1 00:04:19.312 03:12:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.312 
03:12:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.312 03:12:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.570 /dev/nbd0 00:04:19.570 03:12:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.570 03:12:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:19.570 1+0 records in 00:04:19.570 1+0 records out 00:04:19.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213771 s, 19.2 MB/s 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:19.570 03:12:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:19.570 03:12:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.570 03:12:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.571 03:12:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:19.837 /dev/nbd1 00:04:19.837 03:12:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.837 03:12:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:19.837 03:12:39 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.837 1+0 records in 00:04:19.837 1+0 records out 00:04:19.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182039 s, 22.5 MB/s 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:19.837 03:12:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:19.837 03:12:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.837 03:12:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.837 03:12:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.837 03:12:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.837 03:12:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:20.096 { 00:04:20.096 "nbd_device": "/dev/nbd0", 00:04:20.096 "bdev_name": "Malloc0" 00:04:20.096 }, 00:04:20.096 { 00:04:20.096 "nbd_device": "/dev/nbd1", 00:04:20.096 "bdev_name": "Malloc1" 00:04:20.096 } 00:04:20.096 ]' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:20.096 { 00:04:20.096 "nbd_device": "/dev/nbd0", 00:04:20.096 "bdev_name": "Malloc0" 00:04:20.096 
}, 00:04:20.096 { 00:04:20.096 "nbd_device": "/dev/nbd1", 00:04:20.096 "bdev_name": "Malloc1" 00:04:20.096 } 00:04:20.096 ]' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:20.096 /dev/nbd1' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:20.096 /dev/nbd1' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:20.096 256+0 records in 00:04:20.096 256+0 records out 00:04:20.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106756 s, 98.2 MB/s 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:20.096 256+0 records in 00:04:20.096 256+0 records out 00:04:20.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144787 s, 72.4 MB/s 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:20.096 256+0 records in 00:04:20.096 256+0 records out 00:04:20.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161973 s, 64.7 MB/s 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:20.096 03:12:40 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.096 03:12:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.355 03:12:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.613 03:12:40 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.613 03:12:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.870 03:12:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.870 03:12:40 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:21.128 03:12:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:21.128 [2024-12-06 03:12:41.250581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.387 [2024-12-06 03:12:41.289234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.387 [2024-12-06 03:12:41.289236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.387 [2024-12-06 03:12:41.329858] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.387 [2024-12-06 03:12:41.329897] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.666 03:12:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.666 03:12:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:24.666 spdk_app_start Round 1 00:04:24.666 03:12:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2428426 /var/tmp/spdk-nbd.sock 00:04:24.666 03:12:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2428426 ']' 00:04:24.666 03:12:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.666 03:12:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.666 03:12:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
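The teardown trace above repeatedly runs `waitfornbd_exit`, which polls `/proc/partitions` until the nbd row disappears (up to 20 tries). A minimal runnable sketch of that polling pattern, with the assumption that a temp file stands in for `/proc/partitions` so it runs without a live SPDK nbd target:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd_exit loop from bdev/nbd_common.sh as the
# xtrace shows it. MARKER is a stand-in for /proc/partitions.
waitfornbd_exit_sketch() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    if grep -q -w "$nbd_name" "$MARKER"; then
      sleep 0.1   # row still present: device not yet detached, poll again
    else
      break       # row gone: the nbd device has detached
    fi
  done
  return 0
}

MARKER=$(mktemp)
printf '259 0 1048576 nbd9\n' > "$MARKER"   # fake partitions row
( sleep 0.3; : > "$MARKER" ) &              # simulate the detach arriving
waitfornbd_exit_sketch nbd9
status=$?
echo "nbd9 detached (rc=$status)"
```

Note the helper returns 0 even if the device never disappears; the real script relies on later steps (`nbd_get_disks` returning `[]`) to catch a stuck device.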
00:04:24.666 03:12:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.666 03:12:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.666 03:12:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.666 03:12:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:24.666 03:12:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.666 Malloc0 00:04:24.666 03:12:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.666 Malloc1 00:04:24.666 03:12:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.666 03:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.667 03:12:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.925 /dev/nbd0 00:04:24.925 03:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.925 03:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.925 1+0 records in 00:04:24.925 1+0 records out 00:04:24.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188319 s, 21.8 MB/s 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:24.925 03:12:44 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:24.925 03:12:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:24.925 03:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.925 03:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.925 03:12:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.181 /dev/nbd1 00:04:25.181 03:12:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.181 03:12:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.181 1+0 records in 00:04:25.181 1+0 records out 00:04:25.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220905 s, 18.5 MB/s 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:25.181 03:12:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:25.181 03:12:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.181 03:12:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.181 03:12:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.181 03:12:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.181 03:12:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.439 { 00:04:25.439 "nbd_device": "/dev/nbd0", 00:04:25.439 "bdev_name": "Malloc0" 00:04:25.439 }, 00:04:25.439 { 00:04:25.439 "nbd_device": "/dev/nbd1", 00:04:25.439 "bdev_name": "Malloc1" 00:04:25.439 } 00:04:25.439 ]' 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.439 { 00:04:25.439 "nbd_device": "/dev/nbd0", 00:04:25.439 "bdev_name": "Malloc0" 00:04:25.439 }, 00:04:25.439 { 00:04:25.439 "nbd_device": "/dev/nbd1", 00:04:25.439 "bdev_name": "Malloc1" 00:04:25.439 } 00:04:25.439 ]' 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.439 /dev/nbd1' 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.439 /dev/nbd1' 00:04:25.439 
03:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.439 256+0 records in 00:04:25.439 256+0 records out 00:04:25.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106276 s, 98.7 MB/s 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.439 256+0 records in 00:04:25.439 256+0 records out 00:04:25.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137598 s, 76.2 MB/s 00:04:25.439 03:12:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.440 256+0 records in 00:04:25.440 256+0 records out 00:04:25.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015128 s, 69.3 MB/s 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.440 03:12:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.696 03:12:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.953 03:12:45 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.953 03:12:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.210 03:12:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.210 03:12:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.468 03:12:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.468 [2024-12-06 03:12:46.539400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.468 [2024-12-06 03:12:46.576427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.468 [2024-12-06 03:12:46.576430] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.726 [2024-12-06 03:12:46.618238] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.726 [2024-12-06 03:12:46.618276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:29.284 03:12:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:29.284 03:12:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:29.284 spdk_app_start Round 2 00:04:29.284 03:12:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2428426 /var/tmp/spdk-nbd.sock 00:04:29.284 03:12:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2428426 ']' 00:04:29.284 03:12:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.284 03:12:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.284 03:12:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
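Round 2 above repeats the same data path as Round 1: `nbd_dd_data_verify` fills a temp file from `/dev/urandom` with dd, copies it onto each nbd device, then `cmp`s the first 1M of each device against the file. A sketch of that write/verify round-trip, with the assumption that plain temp files stand in for `/dev/nbd0` and `/dev/nbd1` (real nbd devices need the running SPDK target):

```shell
#!/usr/bin/env bash
set -e
tmp_file=$(mktemp)   # stands in for .../test/event/nbdrandtest
dev0=$(mktemp)       # stands in for /dev/nbd0
dev1=$(mktemp)       # stands in for /dev/nbd1

# write phase: 1 MiB random pattern, then copy it to each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2> /dev/null
for dev in "$dev0" "$dev1"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 2> /dev/null
done

# verify phase: first 1M of each "device" must match the pattern
for dev in "$dev0" "$dev1"; do
  cmp -b -n 1M "$tmp_file" "$dev"   # non-zero exit fails the test via set -e
done
rm "$tmp_file"
echo "verify ok"
```

The `cmp -b -n 1M` form is the one the log itself uses (`-n` takes a SIZE with multiplier suffix in GNU diffutils); deleting the pattern file afterwards mirrors `bdev/nbd_common.sh@85`.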
00:04:29.284 03:12:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.284 03:12:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.542 03:12:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.542 03:12:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:29.542 03:12:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.801 Malloc0 00:04:29.801 03:12:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.061 Malloc1 00:04:30.061 03:12:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.061 03:12:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.061 /dev/nbd0 00:04:30.061 03:12:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.061 03:12:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.061 03:12:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:30.061 03:12:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:30.061 03:12:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:30.061 03:12:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:30.061 03:12:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:30.061 03:12:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:30.061 03:12:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:30.061 03:12:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:30.061 03:12:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.061 1+0 records in 00:04:30.061 1+0 records out 00:04:30.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185688 s, 22.1 MB/s 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:30.320 03:12:50 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:30.320 03:12:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.320 03:12:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.320 03:12:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.320 /dev/nbd1 00:04:30.320 03:12:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.320 03:12:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:30.320 03:12:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.579 1+0 records in 00:04:30.579 1+0 records out 00:04:30.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211565 s, 19.4 MB/s 00:04:30.579 03:12:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.579 03:12:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:30.579 03:12:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.579 03:12:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:30.579 03:12:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.579 { 00:04:30.579 "nbd_device": "/dev/nbd0", 00:04:30.579 "bdev_name": "Malloc0" 00:04:30.579 }, 00:04:30.579 { 00:04:30.579 "nbd_device": "/dev/nbd1", 00:04:30.579 "bdev_name": "Malloc1" 00:04:30.579 } 00:04:30.579 ]' 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.579 { 00:04:30.579 "nbd_device": "/dev/nbd0", 00:04:30.579 "bdev_name": "Malloc0" 00:04:30.579 }, 00:04:30.579 { 00:04:30.579 "nbd_device": "/dev/nbd1", 00:04:30.579 "bdev_name": "Malloc1" 00:04:30.579 } 00:04:30.579 ]' 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.579 /dev/nbd1' 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.579 /dev/nbd1' 00:04:30.579 
03:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.579 256+0 records in 00:04:30.579 256+0 records out 00:04:30.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106722 s, 98.3 MB/s 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.579 03:12:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.837 256+0 records in 00:04:30.837 256+0 records out 00:04:30.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140151 s, 74.8 MB/s 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.837 256+0 records in 00:04:30.837 256+0 records out 00:04:30.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151643 s, 69.1 MB/s 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.837 03:12:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.838 03:12:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.838 03:12:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.838 03:12:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:30.838 03:12:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.838 03:12:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.838 03:12:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.838 03:12:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.838 03:12:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.838 03:12:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.095 03:12:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.095 03:12:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.095 03:12:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.095 03:12:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.095 03:12:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.095 03:12:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.095 03:12:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.095 03:12:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.095 03:12:51 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.095 03:12:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.354 03:12:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.354 03:12:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.612 03:12:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.871 [2024-12-06 03:12:51.800916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.871 [2024-12-06 03:12:51.838199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.871 [2024-12-06 03:12:51.838202] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.871 [2024-12-06 03:12:51.879276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.871 [2024-12-06 03:12:51.879318] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.156 03:12:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2428426 /var/tmp/spdk-nbd.sock 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2428426 ']' 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
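The `nbd_dd_data_verify` step traced above writes one random pattern file onto each NBD device with `dd`, then `cmp`s every device back against the same file. A minimal standalone sketch of that write/verify loop follows; generic target paths stand in for `/dev/nbd0` and `/dev/nbd1`, and `oflag=direct` is dropped so the sketch also runs against regular files:

```shell
# Sketch of the nbd_dd_data_verify pattern from bdev/nbd_common.sh, as
# exercised in the trace above. Target paths are parameters here; the real
# helper iterates over /dev/nbd* and uses oflag=direct for the write phase.
dd_data_verify() {
    local tmp_file=$1; shift
    local targets=("$@")
    local t

    # write phase: copy the same 1 MiB pattern onto every target
    for t in "${targets[@]}"; do
        dd if="$tmp_file" of="$t" bs=4096 count=256 2>/dev/null
    done

    # verify phase: every target must match the pattern byte-for-byte
    for t in "${targets[@]}"; do
        cmp -b -n 1M "$tmp_file" "$t"
    done
}
```

On the real `/dev/nbd*` devices both phases go through the NBD kernel module, so a clean `cmp` exercises the whole bdev I/O path, not just a file copy.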
00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:35.156 03:12:54 event.app_repeat -- event/event.sh@39 -- # killprocess 2428426 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2428426 ']' 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2428426 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2428426 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2428426' 00:04:35.156 killing process with pid 2428426 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2428426 00:04:35.156 03:12:54 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2428426 00:04:35.156 spdk_app_start is called in Round 0. 00:04:35.156 Shutdown signal received, stop current app iteration 00:04:35.156 Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 reinitialization... 00:04:35.156 spdk_app_start is called in Round 1. 00:04:35.156 Shutdown signal received, stop current app iteration 00:04:35.156 Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 reinitialization... 00:04:35.156 spdk_app_start is called in Round 2. 
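The `waitforlisten` calls above block until the target has created its RPC UNIX socket (here `/var/tmp/spdk-nbd.sock`). The core of that idiom is a bounded polling loop; this sketch checks only for the socket's existence, whereas the real helper in `autotest_common.sh` also probes the socket with an RPC call, and the default retry budget of 100 mirrors `max_retries` in the trace:

```shell
# Minimal polling loop in the spirit of autotest_common.sh's waitforlisten:
# wait until a UNIX-domain socket appears at the given path, giving up
# after max_retries attempts. The real helper additionally verifies the
# target answers RPCs on that socket; this sketch tests existence only.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$sock" ] && return 0      # socket file exists: target is up
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```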
00:04:35.156 Shutdown signal received, stop current app iteration 00:04:35.156 Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 reinitialization... 00:04:35.156 spdk_app_start is called in Round 3. 00:04:35.156 Shutdown signal received, stop current app iteration 00:04:35.156 03:12:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:35.156 03:12:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:35.156 00:04:35.156 real 0m16.260s 00:04:35.156 user 0m35.598s 00:04:35.156 sys 0m2.569s 00:04:35.156 03:12:55 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.156 03:12:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.156 ************************************ 00:04:35.156 END TEST app_repeat 00:04:35.156 ************************************ 00:04:35.156 03:12:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:35.156 03:12:55 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:35.156 03:12:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.156 03:12:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.156 03:12:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.156 ************************************ 00:04:35.156 START TEST cpu_locks 00:04:35.156 ************************************ 00:04:35.156 03:12:55 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:35.156 * Looking for test storage... 
00:04:35.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:35.156 03:12:55 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.157 03:12:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.157 --rc genhtml_branch_coverage=1 00:04:35.157 --rc genhtml_function_coverage=1 00:04:35.157 --rc genhtml_legend=1 00:04:35.157 --rc geninfo_all_blocks=1 00:04:35.157 --rc geninfo_unexecuted_blocks=1 00:04:35.157 00:04:35.157 ' 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.157 --rc genhtml_branch_coverage=1 00:04:35.157 --rc genhtml_function_coverage=1 00:04:35.157 --rc genhtml_legend=1 00:04:35.157 --rc geninfo_all_blocks=1 00:04:35.157 --rc geninfo_unexecuted_blocks=1 
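The lcov version gate traced above comes from `scripts/common.sh`: both version strings are split on `.`, `-`, and `:`, then compared field by field, with missing fields treated as 0. A self-contained sketch of that comparison (numeric fields only; zero-padded fields that bash would read as octal are ignored here):

```shell
# Field-wise version comparison in the style of scripts/common.sh's
# lt/cmp_versions, as traced above: split on . - : and compare the fields
# numerically, padding the shorter version with zeros.
# Returns 0 exactly when $1 < $2.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"

    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local i a b
    for ((i = 0; i < len; i++)); do
        a=${ver1[i]:-0}   # missing fields count as 0
        b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1              # equal versions are not less-than
}
```

`version_lt 1.15 2` succeeds, which is the shape of the `lt 1.15 2` check in the trace deciding whether the installed lcov predates version 2.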
00:04:35.157 00:04:35.157 ' 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.157 --rc genhtml_branch_coverage=1 00:04:35.157 --rc genhtml_function_coverage=1 00:04:35.157 --rc genhtml_legend=1 00:04:35.157 --rc geninfo_all_blocks=1 00:04:35.157 --rc geninfo_unexecuted_blocks=1 00:04:35.157 00:04:35.157 ' 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.157 --rc genhtml_branch_coverage=1 00:04:35.157 --rc genhtml_function_coverage=1 00:04:35.157 --rc genhtml_legend=1 00:04:35.157 --rc geninfo_all_blocks=1 00:04:35.157 --rc geninfo_unexecuted_blocks=1 00:04:35.157 00:04:35.157 ' 00:04:35.157 03:12:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:35.157 03:12:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:35.157 03:12:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:35.157 03:12:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.157 03:12:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.415 ************************************ 00:04:35.415 START TEST default_locks 00:04:35.415 ************************************ 00:04:35.415 03:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:35.415 03:12:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2431425 00:04:35.415 03:12:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2431425 00:04:35.415 03:12:55 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.415 03:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2431425 ']' 00:04:35.415 03:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.415 03:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.415 03:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.415 03:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.415 03:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.415 [2024-12-06 03:12:55.352356] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:04:35.415 [2024-12-06 03:12:55.352399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431425 ] 00:04:35.416 [2024-12-06 03:12:55.415033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.416 [2024-12-06 03:12:55.455336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.673 03:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.673 03:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:35.673 03:12:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2431425 00:04:35.673 03:12:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2431425 00:04:35.673 03:12:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.240 lslocks: write error 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2431425 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2431425 ']' 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2431425 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431425 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2431425' 00:04:36.240 killing process with pid 2431425 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2431425 00:04:36.240 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2431425 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2431425 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2431425 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2431425 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2431425 ']' 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
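`killprocess`, seen above for pid 2431425, is careful in two ways: it looks up the pid's command name and refuses to signal a bare `sudo` wrapper directly, and it reaps the process after the kill so later `kill -0` and lock checks see a clean state. A condensed sketch, assuming a procps-style `ps` and a target that is a child of the calling shell:

```shell
# Condensed sketch of autotest_common.sh's killprocess as traced above:
# confirm the process is alive, resolve its command name, never signal a
# plain sudo wrapper itself, then terminate and reap the target.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1           # still running?

    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
        # signal the real target, not the sudo wrapper
        pid=$(pgrep -P "$pid")
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap if it is our child
}
```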
00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2431425) - No such process 00:04:36.498 ERROR: process (pid: 2431425) is no longer running 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:36.498 00:04:36.498 real 0m1.234s 00:04:36.498 user 0m1.212s 00:04:36.498 sys 0m0.526s 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.498 03:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.498 ************************************ 00:04:36.498 END TEST default_locks 00:04:36.498 ************************************ 00:04:36.498 03:12:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:36.498 03:12:56 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.498 03:12:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.498 03:12:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.498 ************************************ 00:04:36.498 START TEST default_locks_via_rpc 00:04:36.498 ************************************ 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2431690 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2431690 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2431690 ']' 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.498 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.756 [2024-12-06 03:12:56.651760] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:04:36.756 [2024-12-06 03:12:56.651807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431690 ] 00:04:36.756 [2024-12-06 03:12:56.713525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.756 [2024-12-06 03:12:56.756484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.015 03:12:56 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2431690 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2431690 00:04:37.015 03:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.015 03:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2431690 00:04:37.015 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2431690 ']' 00:04:37.015 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2431690 00:04:37.015 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.015 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.015 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431690 00:04:37.273 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.273 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.273 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431690' 00:04:37.273 killing process with pid 2431690 00:04:37.273 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2431690 00:04:37.273 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2431690 00:04:37.532 00:04:37.532 real 0m0.896s 00:04:37.532 user 0m0.841s 00:04:37.532 sys 0m0.420s 00:04:37.532 03:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.532 03:12:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.532 ************************************ 00:04:37.532 END TEST default_locks_via_rpc 00:04:37.532 ************************************ 00:04:37.532 03:12:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:37.532 03:12:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.532 03:12:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.532 03:12:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.532 ************************************ 00:04:37.532 START TEST non_locking_app_on_locked_coremask 00:04:37.532 ************************************ 00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2431937 00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2431937 /var/tmp/spdk.sock 00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2431937 ']' 00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
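The `locks_exist` checks running through these tests (and the `lslocks: write error` noise above, which is just `lslocks` hitting a closed pipe once `grep -q` has matched and exited) reduce to a single pipeline: list the file locks held by the target pid and look for a per-core lock file whose path contains `spdk_cpu_lock`. As a sketch:

```shell
# The locks_exist helper from event/cpu_locks.sh, as used throughout this
# trace: a target that has claimed its CPU cores holds locks on per-core
# files whose names contain spdk_cpu_lock, which appear in lslocks output
# for that pid. grep -q exits on the first match, which is why lslocks
# occasionally logs "write error" above; the pipeline status is grep's.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```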
00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.532 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.532 [2024-12-06 03:12:57.610637] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:37.532 [2024-12-06 03:12:57.610679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431937 ] 00:04:37.791 [2024-12-06 03:12:57.672422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.791 [2024-12-06 03:12:57.715143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.791 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.791 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:37.791 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2431947 00:04:37.792 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2431947 /var/tmp/spdk2.sock 00:04:37.792 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2431947 ']' 00:04:37.792 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:37.792 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.792 03:12:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:37.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:37.792 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:37.792 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.792 03:12:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.051 [2024-12-06 03:12:57.973097] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:38.051 [2024-12-06 03:12:57.973144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431947 ] 00:04:38.051 [2024-12-06 03:12:58.059793] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
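The `CPU core locks deactivated` notice above is the crux of this test: the first `spdk_tgt` on `-m 0x1` holds the core-0 lock, and the second instance only comes up because it was started with `--disable-cpumask-locks` (on its own RPC socket, `/var/tmp/spdk2.sock`). The locking behaviour can be simulated with `flock`; this is an illustrative stand-in with made-up file names, not SPDK's actual implementation:

```shell
# Illustrative simulation (not SPDK code) of the cpumask lock semantics
# exercised above: one lock file per core, claimed with a non-blocking
# exclusive flock; a second claimant on the same core fails fast unless
# it opts out, the analog of --disable-cpumask-locks.
core_lock_file() { echo "/tmp/spdk_cpu_lock_demo_$1"; }

claim_core() {
    local core=$1 mode=${2:-lock} fd
    [ "$mode" = nolock ] && return 0        # --disable-cpumask-locks analog

    exec {fd}>"$(core_lock_file "$core")"   # open a dedicated fd
    if ! flock -n "$fd"; then               # fail fast if the core is taken
        exec {fd}>&-
        return 1
    fi
    CORE_LOCK_FD=$fd                        # keep the fd open to hold the lock
}
```

After `claim_core 0` succeeds, a second `claim_core 0` fails immediately, while `claim_core 0 nolock` still returns 0 despite the held lock, which mirrors why the second target above needed both the flag and a separate `-r` socket.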
00:04:38.051 [2024-12-06 03:12:58.059815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.051 [2024-12-06 03:12:58.145826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.987 03:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.987 03:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.987 03:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2431937 00:04:38.987 03:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2431937 00:04:38.987 03:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.246 lslocks: write error 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2431937 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2431937 ']' 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2431937 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431937 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2431937' 00:04:39.246 killing process with pid 2431937 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2431937 00:04:39.246 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2431937 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2431947 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2431947 ']' 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2431947 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431947 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431947' 00:04:39.814 killing process with pid 2431947 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2431947 00:04:39.814 03:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2431947 00:04:40.073 00:04:40.073 real 0m2.613s 00:04:40.073 user 0m2.761s 00:04:40.073 sys 0m0.857s 00:04:40.073 03:13:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.073 03:13:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.073 ************************************ 00:04:40.073 END TEST non_locking_app_on_locked_coremask 00:04:40.073 ************************************ 00:04:40.073 03:13:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:40.073 03:13:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.074 03:13:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.074 03:13:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.332 ************************************ 00:04:40.332 START TEST locking_app_on_unlocked_coremask 00:04:40.333 ************************************ 00:04:40.333 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:40.333 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2432435 00:04:40.333 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2432435 /var/tmp/spdk.sock 00:04:40.333 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:40.333 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2432435 ']' 00:04:40.333 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.333 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.333 03:13:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.333 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.333 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.333 [2024-12-06 03:13:00.290408] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:40.333 [2024-12-06 03:13:00.290448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432435 ] 00:04:40.333 [2024-12-06 03:13:00.352810] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:40.333 [2024-12-06 03:13:00.352839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.333 [2024-12-06 03:13:00.391490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2432438 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2432438 /var/tmp/spdk2.sock 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2432438 ']' 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.592 03:13:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.592 [2024-12-06 03:13:00.652299] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:04:40.592 [2024-12-06 03:13:00.652341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432438 ] 00:04:40.852 [2024-12-06 03:13:00.743277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.852 [2024-12-06 03:13:00.824469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.420 03:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.420 03:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:41.420 03:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2432438 00:04:41.420 03:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2432438 00:04:41.420 03:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.988 lslocks: write error 00:04:41.988 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2432435 00:04:41.988 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2432435 ']' 00:04:41.989 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2432435 00:04:41.989 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:41.989 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.989 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2432435 00:04:41.989 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.989 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.989 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2432435' 00:04:41.989 killing process with pid 2432435 00:04:41.989 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2432435 00:04:41.989 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2432435 00:04:42.927 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2432438 00:04:42.927 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2432438 ']' 00:04:42.927 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2432438 00:04:42.927 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:42.927 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.928 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2432438 00:04:42.928 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.928 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.928 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2432438' 00:04:42.928 killing process with pid 2432438 00:04:42.928 03:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2432438 00:04:42.928 03:13:02 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2432438 00:04:42.928 00:04:42.928 real 0m2.818s 00:04:42.928 user 0m2.976s 00:04:42.928 sys 0m0.935s 00:04:42.928 03:13:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.928 03:13:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.928 ************************************ 00:04:42.928 END TEST locking_app_on_unlocked_coremask 00:04:42.928 ************************************ 00:04:43.188 03:13:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:43.188 03:13:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.188 03:13:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.188 03:13:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.188 ************************************ 00:04:43.188 START TEST locking_app_on_locked_coremask 00:04:43.188 ************************************ 00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2432954 00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2432954 /var/tmp/spdk.sock 00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2432954 ']' 00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.188 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.188 [2024-12-06 03:13:03.178206] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:43.188 [2024-12-06 03:13:03.178244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432954 ] 00:04:43.188 [2024-12-06 03:13:03.240206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.188 [2024-12-06 03:13:03.282579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2432958 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2432958 /var/tmp/spdk2.sock 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2432958 /var/tmp/spdk2.sock 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2432958 /var/tmp/spdk2.sock 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2432958 ']' 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.447 03:13:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.447 [2024-12-06 03:13:03.535112] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:43.447 [2024-12-06 03:13:03.535152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432958 ] 00:04:43.705 [2024-12-06 03:13:03.625498] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2432954 has claimed it. 00:04:43.705 [2024-12-06 03:13:03.625541] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:44.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2432958) - No such process 00:04:44.270 ERROR: process (pid: 2432958) is no longer running 00:04:44.270 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.270 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:44.270 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:44.270 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.270 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.270 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.270 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2432954 00:04:44.270 03:13:04 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2432954 00:04:44.270 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.530 lslocks: write error 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2432954 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2432954 ']' 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2432954 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2432954 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2432954' 00:04:44.530 killing process with pid 2432954 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2432954 00:04:44.530 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2432954 00:04:44.788 00:04:44.788 real 0m1.645s 00:04:44.788 user 0m1.734s 00:04:44.788 sys 0m0.557s 00:04:44.788 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.788 03:13:04 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:44.788 ************************************ 00:04:44.788 END TEST locking_app_on_locked_coremask 00:04:44.788 ************************************ 00:04:44.788 03:13:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:44.788 03:13:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.789 03:13:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.789 03:13:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.789 ************************************ 00:04:44.789 START TEST locking_overlapped_coremask 00:04:44.789 ************************************ 00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2433222 00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2433222 /var/tmp/spdk.sock 00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2433222 ']' 00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.789 03:13:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.789 [2024-12-06 03:13:04.885997] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:44.789 [2024-12-06 03:13:04.886041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433222 ] 00:04:45.048 [2024-12-06 03:13:04.948187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:45.048 [2024-12-06 03:13:04.993290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.048 [2024-12-06 03:13:04.993387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.048 [2024-12-06 03:13:04.993387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2433248 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2433248 /var/tmp/spdk2.sock 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2433248 
/var/tmp/spdk2.sock 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2433248 /var/tmp/spdk2.sock 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2433248 ']' 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.313 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.313 [2024-12-06 03:13:05.258016] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:04:45.313 [2024-12-06 03:13:05.258063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433248 ] 00:04:45.313 [2024-12-06 03:13:05.350609] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2433222 has claimed it. 00:04:45.313 [2024-12-06 03:13:05.350649] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:45.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2433248) - No such process 00:04:45.881 ERROR: process (pid: 2433248) is no longer running 00:04:45.881 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.881 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:45.881 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:45.881 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:45.881 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:45.881 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:45.881 03:13:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:45.881 03:13:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:45.881 03:13:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2433222 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2433222 ']' 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2433222 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2433222 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2433222' 00:04:45.882 killing process with pid 2433222 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2433222 00:04:45.882 03:13:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2433222 00:04:46.140 00:04:46.140 real 0m1.425s 00:04:46.140 user 0m3.961s 00:04:46.140 sys 0m0.382s 00:04:46.140 03:13:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.140 03:13:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.140 
************************************ 00:04:46.140 END TEST locking_overlapped_coremask 00:04:46.140 ************************************ 00:04:46.398 03:13:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:46.398 03:13:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.398 03:13:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.398 03:13:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.398 ************************************ 00:04:46.398 START TEST locking_overlapped_coremask_via_rpc 00:04:46.398 ************************************ 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2433486 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2433486 /var/tmp/spdk.sock 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2433486 ']' 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:46.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.398 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.398 [2024-12-06 03:13:06.378801] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:46.398 [2024-12-06 03:13:06.378841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433486 ] 00:04:46.398 [2024-12-06 03:13:06.439688] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:46.398 [2024-12-06 03:13:06.439719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.398 [2024-12-06 03:13:06.480248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.398 [2024-12-06 03:13:06.480343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.399 [2024-12-06 03:13:06.480343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2433588 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2433588 /var/tmp/spdk2.sock 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2433588 ']' 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.657 03:13:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.657 [2024-12-06 03:13:06.751863] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:46.657 [2024-12-06 03:13:06.751912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433588 ] 00:04:46.915 [2024-12-06 03:13:06.845094] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:46.915 [2024-12-06 03:13:06.845127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.915 [2024-12-06 03:13:06.932433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.915 [2024-12-06 03:13:06.935994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.915 [2024-12-06 03:13:06.935995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.482 03:13:07 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.482 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.741 [2024-12-06 03:13:07.622021] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2433486 has claimed it. 00:04:47.741 request: 00:04:47.741 { 00:04:47.741 "method": "framework_enable_cpumask_locks", 00:04:47.741 "req_id": 1 00:04:47.741 } 00:04:47.741 Got JSON-RPC error response 00:04:47.741 response: 00:04:47.741 { 00:04:47.741 "code": -32603, 00:04:47.741 "message": "Failed to claim CPU core: 2" 00:04:47.741 } 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2433486 /var/tmp/spdk.sock 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2433486 ']' 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2433588 /var/tmp/spdk2.sock 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2433588 ']' 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.741 03:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.000 03:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.000 03:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.000 03:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:48.000 03:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:48.000 03:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:48.000 03:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:48.000 00:04:48.000 real 0m1.709s 00:04:48.000 user 0m0.833s 00:04:48.000 sys 0m0.133s 00:04:48.000 03:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.000 03:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.000 ************************************ 00:04:48.000 END TEST locking_overlapped_coremask_via_rpc 00:04:48.000 ************************************ 00:04:48.000 03:13:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:48.000 03:13:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2433486 ]] 00:04:48.000 03:13:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2433486 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2433486 ']' 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2433486 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2433486 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2433486' 00:04:48.000 killing process with pid 2433486 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2433486 00:04:48.000 03:13:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2433486 00:04:48.569 03:13:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2433588 ]] 00:04:48.569 03:13:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2433588 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2433588 ']' 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2433588 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2433588 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2433588' 00:04:48.569 killing process with pid 2433588 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2433588 00:04:48.569 03:13:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2433588 00:04:48.828 03:13:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:48.828 03:13:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:48.828 03:13:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2433486 ]] 00:04:48.828 03:13:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2433486 00:04:48.828 03:13:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2433486 ']' 00:04:48.828 03:13:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2433486 00:04:48.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2433486) - No such process 00:04:48.828 03:13:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2433486 is not found' 00:04:48.828 Process with pid 2433486 is not found 00:04:48.828 03:13:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2433588 ]] 00:04:48.828 03:13:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2433588 00:04:48.828 03:13:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2433588 ']' 00:04:48.828 03:13:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2433588 00:04:48.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2433588) - No such process 00:04:48.828 03:13:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2433588 is not found' 00:04:48.828 Process with pid 2433588 is not found 00:04:48.828 03:13:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:48.828 00:04:48.828 real 0m13.712s 00:04:48.828 user 0m24.149s 00:04:48.828 sys 0m4.758s 00:04:48.828 03:13:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.828 
03:13:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.828 ************************************ 00:04:48.828 END TEST cpu_locks 00:04:48.828 ************************************ 00:04:48.828 00:04:48.828 real 0m37.827s 00:04:48.828 user 1m11.994s 00:04:48.828 sys 0m8.268s 00:04:48.828 03:13:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.828 03:13:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.828 ************************************ 00:04:48.828 END TEST event 00:04:48.828 ************************************ 00:04:48.828 03:13:08 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:48.828 03:13:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.828 03:13:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.828 03:13:08 -- common/autotest_common.sh@10 -- # set +x 00:04:48.828 ************************************ 00:04:48.828 START TEST thread 00:04:48.828 ************************************ 00:04:48.828 03:13:08 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.088 * Looking for test storage... 
00:04:49.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:49.088 03:13:08 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.088 03:13:08 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.088 03:13:08 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.088 03:13:09 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.088 03:13:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.088 03:13:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.088 03:13:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.088 03:13:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.088 03:13:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.088 03:13:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.088 03:13:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.088 03:13:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.088 03:13:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.088 03:13:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.088 03:13:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.088 03:13:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:49.088 03:13:09 thread -- scripts/common.sh@345 -- # : 1 00:04:49.088 03:13:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.088 03:13:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.088 03:13:09 thread -- scripts/common.sh@365 -- # decimal 1 00:04:49.088 03:13:09 thread -- scripts/common.sh@353 -- # local d=1 00:04:49.088 03:13:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.088 03:13:09 thread -- scripts/common.sh@355 -- # echo 1 00:04:49.088 03:13:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.088 03:13:09 thread -- scripts/common.sh@366 -- # decimal 2 00:04:49.089 03:13:09 thread -- scripts/common.sh@353 -- # local d=2 00:04:49.089 03:13:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.089 03:13:09 thread -- scripts/common.sh@355 -- # echo 2 00:04:49.089 03:13:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.089 03:13:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.089 03:13:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.089 03:13:09 thread -- scripts/common.sh@368 -- # return 0 00:04:49.089 03:13:09 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.089 03:13:09 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.089 --rc genhtml_branch_coverage=1 00:04:49.089 --rc genhtml_function_coverage=1 00:04:49.089 --rc genhtml_legend=1 00:04:49.089 --rc geninfo_all_blocks=1 00:04:49.089 --rc geninfo_unexecuted_blocks=1 00:04:49.089 00:04:49.089 ' 00:04:49.089 03:13:09 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.089 --rc genhtml_branch_coverage=1 00:04:49.089 --rc genhtml_function_coverage=1 00:04:49.089 --rc genhtml_legend=1 00:04:49.089 --rc geninfo_all_blocks=1 00:04:49.089 --rc geninfo_unexecuted_blocks=1 00:04:49.089 00:04:49.089 ' 00:04:49.089 03:13:09 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.089 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.089 --rc genhtml_branch_coverage=1 00:04:49.089 --rc genhtml_function_coverage=1 00:04:49.089 --rc genhtml_legend=1 00:04:49.089 --rc geninfo_all_blocks=1 00:04:49.089 --rc geninfo_unexecuted_blocks=1 00:04:49.089 00:04:49.089 ' 00:04:49.089 03:13:09 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.089 --rc genhtml_branch_coverage=1 00:04:49.089 --rc genhtml_function_coverage=1 00:04:49.089 --rc genhtml_legend=1 00:04:49.089 --rc geninfo_all_blocks=1 00:04:49.089 --rc geninfo_unexecuted_blocks=1 00:04:49.089 00:04:49.089 ' 00:04:49.089 03:13:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.089 03:13:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:49.089 03:13:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.089 03:13:09 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.089 ************************************ 00:04:49.089 START TEST thread_poller_perf 00:04:49.089 ************************************ 00:04:49.089 03:13:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.089 [2024-12-06 03:13:09.123815] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:04:49.089 [2024-12-06 03:13:09.123882] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434061 ] 00:04:49.089 [2024-12-06 03:13:09.190121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.349 [2024-12-06 03:13:09.232162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.349 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:50.289 [2024-12-06T02:13:10.430Z] ====================================== 00:04:50.289 [2024-12-06T02:13:10.430Z] busy:2307726578 (cyc) 00:04:50.289 [2024-12-06T02:13:10.430Z] total_run_count: 398000 00:04:50.289 [2024-12-06T02:13:10.430Z] tsc_hz: 2300000000 (cyc) 00:04:50.289 [2024-12-06T02:13:10.430Z] ====================================== 00:04:50.289 [2024-12-06T02:13:10.430Z] poller_cost: 5798 (cyc), 2520 (nsec) 00:04:50.289 00:04:50.289 real 0m1.179s 00:04:50.289 user 0m1.106s 00:04:50.289 sys 0m0.068s 00:04:50.289 03:13:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.289 03:13:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.289 ************************************ 00:04:50.289 END TEST thread_poller_perf 00:04:50.289 ************************************ 00:04:50.289 03:13:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.289 03:13:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:50.289 03:13:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.289 03:13:10 thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.289 ************************************ 00:04:50.289 START TEST thread_poller_perf 00:04:50.289 
************************************ 00:04:50.289 03:13:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.289 [2024-12-06 03:13:10.364793] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:50.289 [2024-12-06 03:13:10.364870] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434310 ] 00:04:50.549 [2024-12-06 03:13:10.429675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.549 [2024-12-06 03:13:10.471406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.549 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:51.489 [2024-12-06T02:13:11.630Z] ====================================== 00:04:51.489 [2024-12-06T02:13:11.630Z] busy:2301774484 (cyc) 00:04:51.489 [2024-12-06T02:13:11.630Z] total_run_count: 5011000 00:04:51.489 [2024-12-06T02:13:11.630Z] tsc_hz: 2300000000 (cyc) 00:04:51.489 [2024-12-06T02:13:11.630Z] ====================================== 00:04:51.489 [2024-12-06T02:13:11.630Z] poller_cost: 459 (cyc), 199 (nsec) 00:04:51.489 00:04:51.489 real 0m1.163s 00:04:51.489 user 0m1.099s 00:04:51.489 sys 0m0.060s 00:04:51.489 03:13:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.489 03:13:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.489 ************************************ 00:04:51.489 END TEST thread_poller_perf 00:04:51.489 ************************************ 00:04:51.489 03:13:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:51.489 00:04:51.489 real 0m2.630s 00:04:51.489 user 0m2.362s 00:04:51.489 sys 0m0.279s 00:04:51.489 03:13:11 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.489 03:13:11 thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.489 ************************************ 00:04:51.489 END TEST thread 00:04:51.489 ************************************ 00:04:51.489 03:13:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:51.489 03:13:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:51.489 03:13:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.489 03:13:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.489 03:13:11 -- common/autotest_common.sh@10 -- # set +x 00:04:51.489 ************************************ 00:04:51.489 START TEST app_cmdline 00:04:51.489 ************************************ 00:04:51.489 03:13:11 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:51.749 * Looking for test storage... 00:04:51.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:51.749 03:13:11 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.749 03:13:11 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.749 03:13:11 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.749 03:13:11 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.749 03:13:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:51.749 03:13:11 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.749 03:13:11 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.749 --rc genhtml_branch_coverage=1 
00:04:51.749 --rc genhtml_function_coverage=1 00:04:51.749 --rc genhtml_legend=1 00:04:51.749 --rc geninfo_all_blocks=1 00:04:51.749 --rc geninfo_unexecuted_blocks=1 00:04:51.749 00:04:51.749 ' 00:04:51.749 03:13:11 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.750 --rc genhtml_branch_coverage=1 00:04:51.750 --rc genhtml_function_coverage=1 00:04:51.750 --rc genhtml_legend=1 00:04:51.750 --rc geninfo_all_blocks=1 00:04:51.750 --rc geninfo_unexecuted_blocks=1 00:04:51.750 00:04:51.750 ' 00:04:51.750 03:13:11 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.750 --rc genhtml_branch_coverage=1 00:04:51.750 --rc genhtml_function_coverage=1 00:04:51.750 --rc genhtml_legend=1 00:04:51.750 --rc geninfo_all_blocks=1 00:04:51.750 --rc geninfo_unexecuted_blocks=1 00:04:51.750 00:04:51.750 ' 00:04:51.750 03:13:11 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.750 --rc genhtml_branch_coverage=1 00:04:51.750 --rc genhtml_function_coverage=1 00:04:51.750 --rc genhtml_legend=1 00:04:51.750 --rc geninfo_all_blocks=1 00:04:51.750 --rc geninfo_unexecuted_blocks=1 00:04:51.750 00:04:51.750 ' 00:04:51.750 03:13:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:51.750 03:13:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2434606 00:04:51.750 03:13:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2434606 00:04:51.750 03:13:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:51.750 03:13:11 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2434606 ']' 00:04:51.750 03:13:11 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:51.750 03:13:11 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.750 03:13:11 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.750 03:13:11 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.750 03:13:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:51.750 [2024-12-06 03:13:11.836255] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:04:51.750 [2024-12-06 03:13:11.836305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434606 ] 00:04:52.009 [2024-12-06 03:13:11.898388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.009 [2024-12-06 03:13:11.939438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.267 03:13:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.267 03:13:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:52.267 { 00:04:52.267 "version": "SPDK v25.01-pre git sha1 05632f11a", 00:04:52.267 "fields": { 00:04:52.267 "major": 25, 00:04:52.267 "minor": 1, 00:04:52.267 "patch": 0, 00:04:52.267 "suffix": "-pre", 00:04:52.267 "commit": "05632f11a" 00:04:52.267 } 00:04:52.267 } 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:52.267 03:13:12 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:52.267 03:13:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:52.267 03:13:12 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:52.267 03:13:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:52.267 03:13:12 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:52.267 03:13:12 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:52.268 03:13:12 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.268 03:13:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.268 03:13:12 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.268 03:13:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.268 03:13:12 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.268 03:13:12 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:52.268 03:13:12 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.268 03:13:12 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:52.268 03:13:12 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:52.527 request: 00:04:52.527 { 00:04:52.527 "method": "env_dpdk_get_mem_stats", 00:04:52.527 "req_id": 1 00:04:52.527 } 00:04:52.527 Got JSON-RPC error response 00:04:52.527 response: 00:04:52.527 { 00:04:52.527 "code": -32601, 00:04:52.527 "message": "Method not found" 00:04:52.527 } 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.527 03:13:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2434606 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2434606 ']' 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2434606 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2434606 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434606' 00:04:52.527 killing process with pid 2434606 00:04:52.527 
03:13:12 app_cmdline -- common/autotest_common.sh@973 -- # kill 2434606 00:04:52.527 03:13:12 app_cmdline -- common/autotest_common.sh@978 -- # wait 2434606 00:04:53.095 00:04:53.095 real 0m1.327s 00:04:53.095 user 0m1.554s 00:04:53.095 sys 0m0.428s 00:04:53.095 03:13:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.095 03:13:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.095 ************************************ 00:04:53.095 END TEST app_cmdline 00:04:53.095 ************************************ 00:04:53.095 03:13:12 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.095 03:13:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.095 03:13:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.095 03:13:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.095 ************************************ 00:04:53.095 START TEST version 00:04:53.095 ************************************ 00:04:53.095 03:13:13 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.095 * Looking for test storage... 
00:04:53.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:53.096 03:13:13 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.096 03:13:13 version -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.096 03:13:13 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.096 03:13:13 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.096 03:13:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.096 03:13:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.096 03:13:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.096 03:13:13 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.096 03:13:13 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.096 03:13:13 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.096 03:13:13 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.096 03:13:13 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.096 03:13:13 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.096 03:13:13 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.096 03:13:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.096 03:13:13 version -- scripts/common.sh@344 -- # case "$op" in 00:04:53.096 03:13:13 version -- scripts/common.sh@345 -- # : 1 00:04:53.096 03:13:13 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.096 03:13:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.096 03:13:13 version -- scripts/common.sh@365 -- # decimal 1 00:04:53.096 03:13:13 version -- scripts/common.sh@353 -- # local d=1 00:04:53.096 03:13:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.096 03:13:13 version -- scripts/common.sh@355 -- # echo 1 00:04:53.096 03:13:13 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.096 03:13:13 version -- scripts/common.sh@366 -- # decimal 2 00:04:53.096 03:13:13 version -- scripts/common.sh@353 -- # local d=2 00:04:53.096 03:13:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.096 03:13:13 version -- scripts/common.sh@355 -- # echo 2 00:04:53.096 03:13:13 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.096 03:13:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.096 03:13:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.096 03:13:13 version -- scripts/common.sh@368 -- # return 0 00:04:53.096 03:13:13 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.096 03:13:13 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.096 --rc genhtml_branch_coverage=1 00:04:53.096 --rc genhtml_function_coverage=1 00:04:53.096 --rc genhtml_legend=1 00:04:53.096 --rc geninfo_all_blocks=1 00:04:53.096 --rc geninfo_unexecuted_blocks=1 00:04:53.096 00:04:53.096 ' 00:04:53.096 03:13:13 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.096 --rc genhtml_branch_coverage=1 00:04:53.096 --rc genhtml_function_coverage=1 00:04:53.096 --rc genhtml_legend=1 00:04:53.096 --rc geninfo_all_blocks=1 00:04:53.096 --rc geninfo_unexecuted_blocks=1 00:04:53.096 00:04:53.096 ' 00:04:53.096 03:13:13 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.096 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.096 --rc genhtml_branch_coverage=1 00:04:53.096 --rc genhtml_function_coverage=1 00:04:53.096 --rc genhtml_legend=1 00:04:53.096 --rc geninfo_all_blocks=1 00:04:53.096 --rc geninfo_unexecuted_blocks=1 00:04:53.096 00:04:53.096 ' 00:04:53.096 03:13:13 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.096 --rc genhtml_branch_coverage=1 00:04:53.096 --rc genhtml_function_coverage=1 00:04:53.096 --rc genhtml_legend=1 00:04:53.096 --rc geninfo_all_blocks=1 00:04:53.096 --rc geninfo_unexecuted_blocks=1 00:04:53.096 00:04:53.096 ' 00:04:53.096 03:13:13 version -- app/version.sh@17 -- # get_header_version major 00:04:53.096 03:13:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.096 03:13:13 version -- app/version.sh@14 -- # cut -f2 00:04:53.096 03:13:13 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.096 03:13:13 version -- app/version.sh@17 -- # major=25 00:04:53.096 03:13:13 version -- app/version.sh@18 -- # get_header_version minor 00:04:53.096 03:13:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.096 03:13:13 version -- app/version.sh@14 -- # cut -f2 00:04:53.096 03:13:13 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.096 03:13:13 version -- app/version.sh@18 -- # minor=1 00:04:53.096 03:13:13 version -- app/version.sh@19 -- # get_header_version patch 00:04:53.096 03:13:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.096 03:13:13 version -- app/version.sh@14 -- # cut -f2 00:04:53.096 03:13:13 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.096 
03:13:13 version -- app/version.sh@19 -- # patch=0 00:04:53.096 03:13:13 version -- app/version.sh@20 -- # get_header_version suffix 00:04:53.096 03:13:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.096 03:13:13 version -- app/version.sh@14 -- # cut -f2 00:04:53.096 03:13:13 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.096 03:13:13 version -- app/version.sh@20 -- # suffix=-pre 00:04:53.096 03:13:13 version -- app/version.sh@22 -- # version=25.1 00:04:53.096 03:13:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:53.096 03:13:13 version -- app/version.sh@28 -- # version=25.1rc0 00:04:53.096 03:13:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:53.096 03:13:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:53.357 03:13:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:53.357 03:13:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:53.357 00:04:53.357 real 0m0.234s 00:04:53.357 user 0m0.139s 00:04:53.357 sys 0m0.137s 00:04:53.357 03:13:13 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.357 03:13:13 version -- common/autotest_common.sh@10 -- # set +x 00:04:53.357 ************************************ 00:04:53.357 END TEST version 00:04:53.357 ************************************ 00:04:53.357 03:13:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:53.357 03:13:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:53.357 03:13:13 -- spdk/autotest.sh@194 -- # uname -s 00:04:53.357 03:13:13 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:53.357 03:13:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:53.357 03:13:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:53.357 03:13:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:53.357 03:13:13 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:53.357 03:13:13 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:53.357 03:13:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.357 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.357 03:13:13 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:53.357 03:13:13 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:53.357 03:13:13 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:53.357 03:13:13 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:53.357 03:13:13 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:53.357 03:13:13 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:53.357 03:13:13 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:53.357 03:13:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:53.357 03:13:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.357 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.357 ************************************ 00:04:53.357 START TEST nvmf_tcp 00:04:53.357 ************************************ 00:04:53.357 03:13:13 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:53.357 * Looking for test storage... 
00:04:53.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:53.357 03:13:13 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.357 03:13:13 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.357 03:13:13 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.617 03:13:13 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.617 03:13:13 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:53.617 03:13:13 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.617 03:13:13 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.617 --rc genhtml_branch_coverage=1 00:04:53.617 --rc genhtml_function_coverage=1 00:04:53.617 --rc genhtml_legend=1 00:04:53.617 --rc geninfo_all_blocks=1 00:04:53.617 --rc geninfo_unexecuted_blocks=1 00:04:53.617 00:04:53.617 ' 00:04:53.617 03:13:13 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.617 --rc genhtml_branch_coverage=1 00:04:53.617 --rc genhtml_function_coverage=1 00:04:53.617 --rc genhtml_legend=1 00:04:53.617 --rc geninfo_all_blocks=1 00:04:53.617 --rc geninfo_unexecuted_blocks=1 00:04:53.617 00:04:53.617 ' 00:04:53.617 03:13:13 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.617 --rc genhtml_branch_coverage=1 00:04:53.617 --rc genhtml_function_coverage=1 00:04:53.617 --rc genhtml_legend=1 00:04:53.617 --rc geninfo_all_blocks=1 00:04:53.617 --rc geninfo_unexecuted_blocks=1 00:04:53.617 00:04:53.617 ' 00:04:53.617 03:13:13 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.618 --rc genhtml_branch_coverage=1 00:04:53.618 --rc genhtml_function_coverage=1 00:04:53.618 --rc genhtml_legend=1 00:04:53.618 --rc geninfo_all_blocks=1 00:04:53.618 --rc geninfo_unexecuted_blocks=1 00:04:53.618 00:04:53.618 ' 00:04:53.618 03:13:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:53.618 03:13:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:53.618 03:13:13 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:53.618 03:13:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:53.618 03:13:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.618 03:13:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.618 ************************************ 00:04:53.618 START TEST nvmf_target_core 00:04:53.618 ************************************ 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:53.618 * Looking for test storage... 
00:04:53.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.618 --rc genhtml_branch_coverage=1 00:04:53.618 --rc genhtml_function_coverage=1 00:04:53.618 --rc genhtml_legend=1 00:04:53.618 --rc geninfo_all_blocks=1 00:04:53.618 --rc geninfo_unexecuted_blocks=1 00:04:53.618 00:04:53.618 ' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.618 --rc genhtml_branch_coverage=1 
00:04:53.618 --rc genhtml_function_coverage=1 00:04:53.618 --rc genhtml_legend=1 00:04:53.618 --rc geninfo_all_blocks=1 00:04:53.618 --rc geninfo_unexecuted_blocks=1 00:04:53.618 00:04:53.618 ' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.618 --rc genhtml_branch_coverage=1 00:04:53.618 --rc genhtml_function_coverage=1 00:04:53.618 --rc genhtml_legend=1 00:04:53.618 --rc geninfo_all_blocks=1 00:04:53.618 --rc geninfo_unexecuted_blocks=1 00:04:53.618 00:04:53.618 ' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.618 --rc genhtml_branch_coverage=1 00:04:53.618 --rc genhtml_function_coverage=1 00:04:53.618 --rc genhtml_legend=1 00:04:53.618 --rc geninfo_all_blocks=1 00:04:53.618 --rc geninfo_unexecuted_blocks=1 00:04:53.618 00:04:53.618 ' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.618 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:53.619 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:53.619 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:53.619 03:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:53.619 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:53.619 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.619 03:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:53.879 ************************************ 00:04:53.879 START TEST nvmf_abort 00:04:53.879 ************************************ 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:53.879 * Looking for test storage... 
00:04:53.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.879 
03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.879 --rc genhtml_branch_coverage=1 00:04:53.879 --rc genhtml_function_coverage=1 00:04:53.879 --rc genhtml_legend=1 00:04:53.879 --rc geninfo_all_blocks=1 00:04:53.879 --rc 
geninfo_unexecuted_blocks=1 00:04:53.879 00:04:53.879 ' 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.879 --rc genhtml_branch_coverage=1 00:04:53.879 --rc genhtml_function_coverage=1 00:04:53.879 --rc genhtml_legend=1 00:04:53.879 --rc geninfo_all_blocks=1 00:04:53.879 --rc geninfo_unexecuted_blocks=1 00:04:53.879 00:04:53.879 ' 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.879 --rc genhtml_branch_coverage=1 00:04:53.879 --rc genhtml_function_coverage=1 00:04:53.879 --rc genhtml_legend=1 00:04:53.879 --rc geninfo_all_blocks=1 00:04:53.879 --rc geninfo_unexecuted_blocks=1 00:04:53.879 00:04:53.879 ' 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.879 --rc genhtml_branch_coverage=1 00:04:53.879 --rc genhtml_function_coverage=1 00:04:53.879 --rc genhtml_legend=1 00:04:53.879 --rc geninfo_all_blocks=1 00:04:53.879 --rc geninfo_unexecuted_blocks=1 00:04:53.879 00:04:53.879 ' 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.879 03:13:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.879 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:53.880 03:13:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:53.880 03:13:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:53.880 03:13:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:53.880 03:13:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:53.880 03:13:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.146 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:59.146 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:59.146 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:59.146 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:59.146 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:59.406 03:13:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:59.406 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:59.406 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:59.406 03:13:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:59.406 Found net devices under 0000:86:00.0: cvl_0_0 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:04:59.406 Found net devices under 0000:86:00.1: cvl_0_1 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:59.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:59.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:04:59.406 00:04:59.406 --- 10.0.0.2 ping statistics --- 00:04:59.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:59.406 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:59.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:59.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:04:59.406 00:04:59.406 --- 10.0.0.1 ping statistics --- 00:04:59.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:59.406 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:04:59.406 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:59.407 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:59.407 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:59.407 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:59.407 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:59.407 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:59.407 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:59.407 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:59.407 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:59.665 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2438281 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2438281 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2438281 ']' 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.666 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.666 [2024-12-06 03:13:19.606736] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:04:59.666 [2024-12-06 03:13:19.606777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:59.666 [2024-12-06 03:13:19.673447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.666 [2024-12-06 03:13:19.717863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:59.666 [2024-12-06 03:13:19.717900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:59.666 [2024-12-06 03:13:19.717907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.666 [2024-12-06 03:13:19.717913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.666 [2024-12-06 03:13:19.717918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:59.666 [2024-12-06 03:13:19.719328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.666 [2024-12-06 03:13:19.719418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.666 [2024-12-06 03:13:19.719420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 [2024-12-06 03:13:19.857275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 Malloc0 00:04:59.925 03:13:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 Delay0 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 [2024-12-06 03:13:19.926974] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.925 03:13:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:59.925 [2024-12-06 03:13:20.043117] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:02.464 Initializing NVMe Controllers 00:05:02.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:02.464 controller IO queue size 128 less than required 00:05:02.464 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:02.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:02.464 Initialization complete. Launching workers. 
00:05:02.464 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36619 00:05:02.464 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36680, failed to submit 62 00:05:02.464 success 36623, unsuccessful 57, failed 0 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:02.464 rmmod nvme_tcp 00:05:02.464 rmmod nvme_fabrics 00:05:02.464 rmmod nvme_keyring 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:02.464 03:13:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2438281 ']' 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2438281 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2438281 ']' 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2438281 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2438281 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2438281' 00:05:02.464 killing process with pid 2438281 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2438281 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2438281 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# iptables-restore 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:02.464 03:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:05.003 00:05:05.003 real 0m10.758s 00:05:05.003 user 0m11.605s 00:05:05.003 sys 0m5.075s 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:05.003 ************************************ 00:05:05.003 END TEST nvmf_abort 00:05:05.003 ************************************ 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:05.003 ************************************ 00:05:05.003 START TEST nvmf_ns_hotplug_stress 00:05:05.003 ************************************ 00:05:05.003 03:13:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:05.003 * Looking for test storage... 00:05:05.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.003 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.004 
03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.004 03:13:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.004 --rc genhtml_branch_coverage=1 00:05:05.004 --rc genhtml_function_coverage=1 00:05:05.004 --rc genhtml_legend=1 00:05:05.004 --rc geninfo_all_blocks=1 00:05:05.004 --rc geninfo_unexecuted_blocks=1 00:05:05.004 00:05:05.004 ' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.004 --rc genhtml_branch_coverage=1 00:05:05.004 --rc genhtml_function_coverage=1 00:05:05.004 --rc genhtml_legend=1 00:05:05.004 --rc geninfo_all_blocks=1 00:05:05.004 --rc geninfo_unexecuted_blocks=1 00:05:05.004 00:05:05.004 ' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.004 --rc genhtml_branch_coverage=1 00:05:05.004 --rc genhtml_function_coverage=1 00:05:05.004 --rc genhtml_legend=1 00:05:05.004 --rc geninfo_all_blocks=1 00:05:05.004 --rc geninfo_unexecuted_blocks=1 00:05:05.004 00:05:05.004 ' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.004 --rc genhtml_branch_coverage=1 00:05:05.004 --rc genhtml_function_coverage=1 00:05:05.004 --rc genhtml_legend=1 00:05:05.004 --rc geninfo_all_blocks=1 00:05:05.004 --rc geninfo_unexecuted_blocks=1 00:05:05.004 
00:05:05.004 ' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:05.004 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:05.005 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:05.005 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:05.005 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:05.005 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:05.005 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:05.005 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:05.005 03:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:10.272 03:13:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:10.272 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:10.272 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:10.272 03:13:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:10.272 Found net devices under 0000:86:00.0: cvl_0_0 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:10.272 03:13:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:10.272 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:10.272 Found net devices under 0000:86:00.1: cvl_0_1 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:10.273 03:13:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:10.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:10.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:05:10.273 00:05:10.273 --- 10.0.0.2 ping statistics --- 00:05:10.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:10.273 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:10.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:10.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:05:10.273 00:05:10.273 --- 10.0.0.1 ping statistics --- 00:05:10.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:10.273 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2442077 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2442077 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2442077 ']' 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:10.273 03:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:10.273 [2024-12-06 03:13:29.814570] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:05:10.273 [2024-12-06 03:13:29.814615] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:10.273 [2024-12-06 03:13:29.879727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.273 [2024-12-06 03:13:29.922693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:10.273 [2024-12-06 03:13:29.922729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:10.273 [2024-12-06 03:13:29.922737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:10.273 [2024-12-06 03:13:29.922743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:10.273 [2024-12-06 03:13:29.922748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
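[editor's note] The target above is launched with `-m 0xE`; a minimal standalone sketch (not part of the test run) of how that core mask decodes to the three reactor cores the log reports next:

```shell
# Decode the SPDK reactor core mask 0xE: bits 1, 2 and 3 are set,
# so reactors run on cores 1, 2 and 3 (core 0 is left free).
mask=$((0xE))
cores=()
for bit in 0 1 2 3 4 5 6 7; do
  if (( (mask >> bit) & 1 )); then
    cores+=("$bit")
  fi
done
echo "${cores[*]}"
```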
00:05:10.273 [2024-12-06 03:13:29.924085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.273 [2024-12-06 03:13:29.924175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.273 [2024-12-06 03:13:29.924177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.273 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.273 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:10.273 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:10.273 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.273 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:10.273 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:10.273 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:10.273 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:10.273 [2024-12-06 03:13:30.246888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.273 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:10.532 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:10.532 [2024-12-06 03:13:30.660372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:10.792 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:10.792 03:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:11.052 Malloc0 00:05:11.052 03:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:11.311 Delay0 00:05:11.311 03:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.571 03:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:11.571 NULL1 00:05:11.571 03:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:11.830 03:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2442561 00:05:11.830 03:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:11.830 03:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:11.830 03:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.090 03:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.350 03:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:12.350 03:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:12.350 true 00:05:12.610 03:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:12.610 03:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.610 03:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.870 03:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:12.870 03:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:13.129 true 00:05:13.129 03:13:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:13.129 03:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.509 Read completed with error (sct=0, sc=11) 00:05:14.509 03:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.509 03:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:14.509 03:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:14.780 true 00:05:14.780 03:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:14.780 03:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.473 03:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.770 03:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:15.770 03:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:15.770 true 00:05:15.770 03:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:15.770 03:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.060 03:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.318 03:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:16.318 03:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:16.577 true 00:05:16.577 03:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:16.577 03:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.514 03:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:05:17.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.773 03:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:17.773 03:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:17.773 true 00:05:18.031 03:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:18.031 03:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.031 03:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.291 03:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:18.291 03:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:18.549 true 00:05:18.549 03:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:18.549 03:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.502 03:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.761 03:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:19.761 03:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:20.020 true 00:05:20.020 03:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:20.020 03:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.959 03:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.959 03:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:20.959 03:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:21.218 true 00:05:21.218 03:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:21.218 03:13:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.476 03:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.736 03:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:21.736 03:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:21.736 true 00:05:21.736 03:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:21.736 03:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.115 03:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.115 03:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:23.115 03:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:23.115 true 00:05:23.116 03:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:23.116 03:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.374 03:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.632 03:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:23.632 03:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:23.890 true 00:05:23.890 03:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:23.890 03:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.827 03:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.085 03:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:25.085 03:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:25.344 true 00:05:25.344 03:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:25.344 03:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.280 03:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.280 03:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:26.280 03:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:26.540 true 00:05:26.540 03:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:26.540 03:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.800 03:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.800 03:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:26.800 03:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:27.060 true 00:05:27.060 03:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:27.060 03:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.441 03:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.441 03:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:28.441 03:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:28.700 true 00:05:28.700 03:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:28.700 03:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.639 03:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.639 03:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:29.639 03:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:29.898 true 00:05:29.898 03:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:29.898 03:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.898 03:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.157 03:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:30.157 03:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:30.415 true 00:05:30.415 03:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:30.415 03:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.348 03:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.605 03:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:31.605 03:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:31.863 true 00:05:31.863 03:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:31.863 03:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.121 03:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.121 03:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:32.121 03:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:32.379 true 00:05:32.379 03:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:32.379 03:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.755 03:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.755 
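[editor's note] The cycle repeated throughout this test (ns_hotplug_stress.sh@44-@50) is: check the perf process is alive, remove namespace 1, re-add Delay0, bump `null_size`, and resize NULL1. A control-flow sketch with the RPC invocations stubbed out (the `rpc` function here is a stand-in for scripts/rpc.py, so the loop runs without a target):

```shell
# Stand-in for /var/jenkins/.../spdk/scripts/rpc.py (assumption: stubbed
# so the hotplug-stress control flow can be exercised anywhere).
rpc() { echo "rpc.py $*"; }

null_size=1000
for i in 1 2 3; do
  # kill -0 $PERF_PID would go here to confirm spdk_nvme_perf is still running
  rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  null_size=$((null_size + 1))
  rpc bdev_null_resize NULL1 "$null_size"
done
```

Each iteration grows NULL1 by one block, which is why the log's `null_size` climbs 1001, 1002, 1003, ... while reads race the removals.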
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.755 03:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:33.755 03:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:34.013 true 00:05:34.013 03:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:34.013 03:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.949 03:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.949 03:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:34.949 03:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:35.207 true 00:05:35.207 03:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:35.207 
03:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.465 03:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.465 03:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:35.465 03:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:35.724 true 00:05:35.724 03:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:35.724 03:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.102 03:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.102 03:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1024 00:05:37.102 03:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:37.361 true 00:05:37.361 03:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:37.361 03:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.184 03:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.184 03:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:38.184 03:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:38.442 true 00:05:38.442 03:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:38.443 03:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.701 03:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.960 03:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:38.960 03:13:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:38.960 true 00:05:38.960 03:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:38.960 03:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.339 03:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.339 03:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:40.339 03:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:40.598 true 00:05:40.598 03:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561 00:05:40.598 03:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:41.535 03:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:41.535 03:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:41.535 03:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:41.794 true
00:05:41.794 03:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561
00:05:41.794 03:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:42.054 03:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:42.054 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:42.054 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:42.313 Initializing NVMe Controllers
00:05:42.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:42.313 Controller IO queue size 128, less than required.
00:05:42.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:42.313 Controller IO queue size 128, less than required.
00:05:42.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:42.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:42.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:42.313 Initialization complete. Launching workers.
00:05:42.313 ========================================================
00:05:42.313                                                                                                          Latency(us)
00:05:42.313 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:05:42.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1751.43       0.86   47513.95    2515.43 1024360.17
00:05:42.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16618.87       8.11    7683.17    1334.48  381443.06
00:05:42.313 ========================================================
00:05:42.313 Total                                                                          :   18370.29       8.97   11480.64    1334.48 1024360.17
00:05:42.313
00:05:42.313 true
00:05:42.313 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2442561
00:05:42.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2442561) - No such process
00:05:42.313 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2442561
00:05:42.313 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:42.572 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:42.831 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:42.831 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:42.831 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:42.831 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:42.831 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:42.831 null0 00:05:43.089 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.089 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.089 03:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:43.089 null1 00:05:43.090 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.090 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.090 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:43.348 null2 00:05:43.348 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.348 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.348 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:43.607 null3 00:05:43.607 03:14:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.607 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.607 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:43.866 null4 00:05:43.866 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.866 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.866 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:43.866 null5 00:05:43.866 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.866 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.866 03:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:44.125 null6 00:05:44.125 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:44.125 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:44.125 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:44.386 null7 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2447969 2447970 2447972 2447974 2447976 2447978 2447979 2447982 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.386 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.647 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.647 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.647 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.647 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.647 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.647 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.647 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.647 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
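From @58 onward the trace switches to the parallel phase: eight null bdevs (null0..null7) are created, then one `add_remove` worker per bdev is launched in the background, each hot-adding and hot-removing its own namespace ID concurrently, and the script finally `wait`s on all the worker PIDs (@66). A minimal, self-contained sketch of that structure follows; `rpc_py` is again a stub for `spdk/scripts/rpc.py`, and the per-worker iteration count of 10 matches the `(( i < 10 ))` guard visible in the trace.

```shell
#!/usr/bin/env bash
# Hedged sketch of the @58-@66 parallel add/remove phase; rpc_py is a no-op
# stub so the sketch runs standalone -- the real test invokes scripts/rpc.py
# against a live nvmf_tgt.

rpc_py() { :; }   # stub for spdk/scripts/rpc.py

NQN=nqn.2016-06.io.spdk:cnode1
nthreads=8
pids=()

add_remove() {    # mirrors add_remove() traced at @14-@18
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc_py nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        rpc_py nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

for ((i = 0; i < nthreads; i++)); do
    rpc_py bdev_null_create "null$i" 100 4096   # @60: one null bdev per worker
done
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &          # @63: workers run concurrently
    pids+=($!)                                  # @64: collect worker PIDs
done
wait "${pids[@]}"                               # @66: join all eight workers
echo "workers=${#pids[@]}"
```

Because the eight workers interleave freely, the xtrace lines from different PIDs arrive shuffled, which is why the `@16`/`@17`/`@62` fragments in the surrounding log appear out of order.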
00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.908 03:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.908 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.908 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.908 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.908 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.908 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:05:44.909 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.909 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.168 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.427 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:45.427 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:45.427 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.427 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.427 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:45.427 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:45.427 03:14:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.427 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.685 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:45.943 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:45.943 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:45.943 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.944 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:45.944 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.944 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.944 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.944 03:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.944 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.202 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.202 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.202 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.202 03:14:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.202 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.203 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.203 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.203 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.203 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.203 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.203 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.461 03:14:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.461 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.720 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.720 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.720 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.720 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.720 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.720 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.720 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.720 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.979 03:14:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.979 03:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.979 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.979 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.979 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.979 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.238 
03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.238 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.497 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.497 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.497 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.497 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.497 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.497 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.497 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.497 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.755 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.755 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.756 03:14:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.756 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.014 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.015 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.015 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.015 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.015 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.015 03:14:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.015 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.015 03:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.273 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.274 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:48.533 rmmod nvme_tcp 00:05:48.533 rmmod nvme_fabrics 00:05:48.533 rmmod nvme_keyring 00:05:48.533 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2442077 ']' 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2442077 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2442077 ']' 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2442077 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2442077 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2442077' 00:05:48.793 killing process with pid 2442077 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2442077 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2442077 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:48.793 03:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.329 03:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:51.329 00:05:51.329 real 0m46.351s 00:05:51.329 user 3m12.590s 00:05:51.329 sys 0m14.639s 00:05:51.329 03:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.329 03:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:51.329 ************************************ 00:05:51.329 END TEST nvmf_ns_hotplug_stress 00:05:51.329 ************************************ 00:05:51.329 03:14:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:51.329 03:14:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:51.329 03:14:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:51.329 ************************************ 00:05:51.329 START TEST nvmf_delete_subsystem 00:05:51.329 ************************************ 00:05:51.329 
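The ns_hotplug_stress trace earlier in this log (the `@16`/`@17`/`@18` markers) corresponds to a loop of roughly the following shape. This is a sketch reconstructed from the trace, not the verbatim test script: `rpc_py` is stubbed so it runs standalone (the real test invokes `spdk/scripts/rpc.py` against a live target), and the add/remove calls are shown sequentially even though the log's shuffled nsid order indicates they run in parallel.

```shell
#!/usr/bin/env bash
# Sketch of the hotplug-stress loop seen in the trace above.
# rpc_py is a stand-in for /var/jenkins/.../spdk/scripts/rpc.py so the
# sketch is runnable without an SPDK target.
rpc_py() { echo "rpc.py $*"; }

nqn=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do
    # Attach eight null bdevs as namespaces 1-8 (ns_hotplug_stress.sh@17).
    # In the log these adds run concurrently, hence the shuffled nsid order.
    for n in {1..8}; do
        rpc_py nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # Hot-remove all eight namespaces again (ns_hotplug_stress.sh@18).
    for n in {1..8}; do
        rpc_py nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done
```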
03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:51.329 * Looking for test storage... 00:05:51.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.329 03:14:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.329 03:14:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.329 --rc genhtml_branch_coverage=1 00:05:51.329 --rc genhtml_function_coverage=1 00:05:51.329 --rc genhtml_legend=1 00:05:51.329 --rc geninfo_all_blocks=1 00:05:51.329 --rc geninfo_unexecuted_blocks=1 00:05:51.329 00:05:51.329 ' 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.329 --rc genhtml_branch_coverage=1 00:05:51.329 --rc genhtml_function_coverage=1 00:05:51.329 --rc genhtml_legend=1 00:05:51.329 --rc geninfo_all_blocks=1 00:05:51.329 --rc geninfo_unexecuted_blocks=1 00:05:51.329 00:05:51.329 ' 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.329 --rc genhtml_branch_coverage=1 00:05:51.329 --rc genhtml_function_coverage=1 00:05:51.329 --rc genhtml_legend=1 00:05:51.329 --rc geninfo_all_blocks=1 00:05:51.329 --rc geninfo_unexecuted_blocks=1 00:05:51.329 00:05:51.329 ' 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.329 --rc genhtml_branch_coverage=1 00:05:51.329 --rc genhtml_function_coverage=1 00:05:51.329 --rc genhtml_legend=1 00:05:51.329 --rc geninfo_all_blocks=1 00:05:51.329 --rc geninfo_unexecuted_blocks=1 00:05:51.329 00:05:51.329 ' 
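The `scripts/common.sh` trace above (`cmp_versions`, exercised here as `lt 1.15 2` while probing the installed lcov version) splits each version string on `.` and `-`, then compares the components numerically, element by element. A simplified reconstruction of that logic, not the verbatim SPDK source:

```shell
# Sketch of the element-wise version comparison traced above.
# Usage: cmp_versions VER1 OP VER2, where OP is '<', '>', or '=='.
cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    # Walk the longer of the two component lists; a missing component
    # counts as 0, so "2" compares like "2.0" against "1.15".
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); ++v )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]   # every component matched
}

cmp_versions 1.15 '<' 2 && echo "1.15 < 2"   # the lt 1.15 2 check from the log
```

Note the components are compared as integers, not strings, so `1.15 < 2` holds even though `"1.15" > "2"` lexically.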
00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.329 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.330 03:14:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:51.330 03:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:56.601 03:14:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:56.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:05:56.601 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:56.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:56.602 Found net devices under 0000:86:00.0: cvl_0_0 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:05:56.602 Found net devices under 0000:86:00.1: cvl_0_1 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:56.602 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:56.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:56.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:05:56.859 00:05:56.859 --- 10.0.0.2 ping statistics --- 00:05:56.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.859 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:56.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:56.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:05:56.859 00:05:56.859 --- 10.0.0.1 ping statistics --- 00:05:56.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.859 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:56.859 03:14:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2452533 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2452533 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2452533 ']' 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.859 03:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.859 [2024-12-06 03:14:16.938975] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:05:56.859 [2024-12-06 03:14:16.939018] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:57.116 [2024-12-06 03:14:17.006262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.116 [2024-12-06 03:14:17.046338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:57.116 [2024-12-06 03:14:17.046376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:57.116 [2024-12-06 03:14:17.046383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.116 [2024-12-06 03:14:17.046388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.116 [2024-12-06 03:14:17.046394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:57.116 [2024-12-06 03:14:17.047590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.116 [2024-12-06 03:14:17.047593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.116 [2024-12-06 03:14:17.184817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.116 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.117 [2024-12-06 03:14:17.201009] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.117 NULL1 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.117 Delay0 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.117 03:14:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2452598 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:57.117 03:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:57.415 [2024-12-06 03:14:17.285809] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:59.312 03:14:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:59.312 03:14:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.312 03:14:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error 
(sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 [2024-12-06 03:14:19.407287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1917680 is same with the state(6) to be set 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 
00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 starting I/O failed: -6 00:05:59.312 [2024-12-06 03:14:19.407639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7fe92000d680 is same with the state(6) to be set 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 
Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with 
error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.312 Write completed with error (sct=0, sc=8) 00:05:59.312 Read completed with error (sct=0, sc=8) 00:05:59.313 Read completed with error (sct=0, sc=8) 00:05:59.313 Read completed with error (sct=0, sc=8) 00:06:00.245 [2024-12-06 03:14:20.382213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19189b0 is same with the state(6) to be set 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 [2024-12-06 03:14:20.410109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe92000d350 is same with the state(6) to be set 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error 
(sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 [2024-12-06 03:14:20.411444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19172c0 is same with the state(6) to be set 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 
00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Write completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 [2024-12-06 03:14:20.411594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1917860 is same with the state(6) to be set 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.503 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Read 
completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 Write completed with error (sct=0, sc=8) 00:06:00.504 Read completed with error (sct=0, sc=8) 00:06:00.504 [2024-12-06 03:14:20.412356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19174a0 is same with the state(6) to be set 00:06:00.504 Initializing NVMe Controllers 00:06:00.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:00.504 Controller IO queue size 128, less than required. 00:06:00.504 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:00.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:00.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:00.504 Initialization complete. Launching workers. 
00:06:00.504 ========================================================
00:06:00.504 Latency(us)
00:06:00.504 Device Information : IOPS MiB/s Average min max
00:06:00.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 179.66 0.09 958466.78 617.11 1013551.74
00:06:00.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.39 0.07 905816.06 229.06 1011347.57
00:06:00.504 ========================================================
00:06:00.504 Total : 328.05 0.16 934650.49 229.06 1013551.74
00:06:00.504
00:06:00.504 [2024-12-06 03:14:20.413047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19189b0 (9): Bad file descriptor
00:06:00.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:00.504 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.504 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:00.504 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2452598
00:06:00.504 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2452598
00:06:01.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2452598) - No such process
00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2452598
00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:01.071 03:14:20
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2452598 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2452598 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:01.071 
03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.071 [2024-12-06 03:14:20.938999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2453197 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2453197 00:06:01.071 03:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:01.071 [2024-12-06 03:14:21.007079] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:01.329 03:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:01.329 03:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2453197 00:06:01.329 03:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:01.895 03:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:01.895 03:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2453197 00:06:01.895 03:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:02.460 03:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:02.460 03:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2453197 00:06:02.460 03:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.026 03:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.026 03:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2453197 00:06:03.026 03:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.591 03:14:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.591 03:14:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2453197 00:06:03.591 03:14:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.848 03:14:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:03.848 03:14:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2453197
00:06:03.848 03:14:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:04.107 Initializing NVMe Controllers
00:06:04.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:04.107 Controller IO queue size 128, less than required.
00:06:04.107 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:04.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:04.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:04.107 Initialization complete. Launching workers.
00:06:04.107 ========================================================
00:06:04.107 Latency(us)
00:06:04.107 Device Information : IOPS MiB/s Average min max
00:06:04.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003112.33 1000147.56 1011470.93
00:06:04.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004812.83 1000261.37 1011345.52
00:06:04.107 ========================================================
00:06:04.107 Total : 256.00 0.12 1003962.58 1000147.56 1011470.93
00:06:04.107
00:06:04.364 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:04.364 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2453197
00:06:04.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2453197) - No such process
00:06:04.364 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 2453197 00:06:04.364 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:04.365 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:04.365 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:04.365 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:04.365 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:04.365 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:04.365 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:04.365 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:04.365 rmmod nvme_tcp 00:06:04.623 rmmod nvme_fabrics 00:06:04.623 rmmod nvme_keyring 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2452533 ']' 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2452533 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2452533 ']' 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2452533 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:04.623 03:14:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2452533 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2452533' 00:06:04.623 killing process with pid 2452533 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2452533 00:06:04.623 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2452533 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.881 03:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.792 03:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:06.792 00:06:06.792 real 0m15.799s 00:06:06.792 user 0m29.054s 00:06:06.792 sys 0m5.203s 00:06:06.792 03:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.792 03:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.792 ************************************ 00:06:06.792 END TEST nvmf_delete_subsystem 00:06:06.792 ************************************ 00:06:06.792 03:14:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:06.792 03:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.792 03:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.792 03:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:06.792 ************************************ 00:06:06.792 START TEST nvmf_host_management 00:06:06.792 ************************************ 00:06:06.792 03:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:07.051 * Looking for test storage... 
00:06:07.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.051 03:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.051 03:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.051 03:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.051 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.051 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.051 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.051 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:07.052 03:14:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.052 03:14:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.052 --rc genhtml_branch_coverage=1 00:06:07.052 --rc genhtml_function_coverage=1 00:06:07.052 --rc genhtml_legend=1 00:06:07.052 --rc geninfo_all_blocks=1 00:06:07.052 --rc geninfo_unexecuted_blocks=1 00:06:07.052 00:06:07.052 ' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.052 --rc genhtml_branch_coverage=1 00:06:07.052 --rc genhtml_function_coverage=1 00:06:07.052 --rc genhtml_legend=1 00:06:07.052 --rc geninfo_all_blocks=1 00:06:07.052 --rc geninfo_unexecuted_blocks=1 00:06:07.052 00:06:07.052 ' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.052 --rc genhtml_branch_coverage=1 00:06:07.052 --rc genhtml_function_coverage=1 00:06:07.052 --rc genhtml_legend=1 00:06:07.052 --rc geninfo_all_blocks=1 00:06:07.052 --rc geninfo_unexecuted_blocks=1 00:06:07.052 00:06:07.052 ' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.052 --rc genhtml_branch_coverage=1 00:06:07.052 --rc genhtml_function_coverage=1 00:06:07.052 --rc genhtml_legend=1 00:06:07.052 --rc geninfo_all_blocks=1 00:06:07.052 --rc geninfo_unexecuted_blocks=1 00:06:07.052 00:06:07.052 ' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.052 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:07.053 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:07.053 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:07.053 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.053 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.053 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.053 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:07.053 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:07.053 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:07.053 03:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:12.325 03:14:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.325 03:14:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:12.325 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:12.325 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:12.325 03:14:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:12.325 Found net devices under 0000:86:00.0: cvl_0_0 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.325 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:12.326 Found net devices under 0000:86:00.1: cvl_0_1 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:12.326 03:14:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:12.326 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:12.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:12.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:06:12.585 00:06:12.585 --- 10.0.0.2 ping statistics --- 00:06:12.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.585 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:12.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:12.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:06:12.585 00:06:12.585 --- 10.0.0.1 ping statistics --- 00:06:12.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.585 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2457300 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2457300 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2457300 ']' 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.585 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.585 [2024-12-06 03:14:32.678524] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:06:12.585 [2024-12-06 03:14:32.678573] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.844 [2024-12-06 03:14:32.746018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.844 [2024-12-06 03:14:32.791051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:12.844 [2024-12-06 03:14:32.791088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.844 [2024-12-06 03:14:32.791095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.844 [2024-12-06 03:14:32.791101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.844 [2024-12-06 03:14:32.791107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:12.844 [2024-12-06 03:14:32.792690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.844 [2024-12-06 03:14:32.792781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.844 [2024-12-06 03:14:32.792889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.844 [2024-12-06 03:14:32.792890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.844 [2024-12-06 03:14:32.931589] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.844 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:12.844 03:14:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.845 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.845 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:12.845 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:12.845 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:12.845 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.845 03:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.845 Malloc0 00:06:13.103 [2024-12-06 03:14:32.998909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:13.103 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.103 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:13.103 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.103 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.103 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2457350 00:06:13.103 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2457350 /var/tmp/bdevperf.sock 00:06:13.103 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2457350 ']' 00:06:13.103 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:13.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:13.104 { 00:06:13.104 "params": { 00:06:13.104 "name": "Nvme$subsystem", 00:06:13.104 "trtype": "$TEST_TRANSPORT", 00:06:13.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:13.104 "adrfam": "ipv4", 00:06:13.104 "trsvcid": "$NVMF_PORT", 00:06:13.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:13.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:13.104 "hdgst": ${hdgst:-false}, 
00:06:13.104 "ddgst": ${ddgst:-false} 00:06:13.104 }, 00:06:13.104 "method": "bdev_nvme_attach_controller" 00:06:13.104 } 00:06:13.104 EOF 00:06:13.104 )") 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:13.104 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:13.104 "params": { 00:06:13.104 "name": "Nvme0", 00:06:13.104 "trtype": "tcp", 00:06:13.104 "traddr": "10.0.0.2", 00:06:13.104 "adrfam": "ipv4", 00:06:13.104 "trsvcid": "4420", 00:06:13.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:13.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:13.104 "hdgst": false, 00:06:13.104 "ddgst": false 00:06:13.104 }, 00:06:13.104 "method": "bdev_nvme_attach_controller" 00:06:13.104 }' 00:06:13.104 [2024-12-06 03:14:33.095880] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:06:13.104 [2024-12-06 03:14:33.095924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2457350 ] 00:06:13.104 [2024-12-06 03:14:33.159425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.104 [2024-12-06 03:14:33.201101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.362 Running I/O for 10 seconds... 
00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:13.621 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:13.882 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:13.882 [2024-12-06 03:14:33.894152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:13.882 [2024-12-06 03:14:33.894193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical command/completion pairs elided: WRITE cid:5-63 lba:98944-106368 and READ cid:0-3 lba:98304-98688, all len:128, each completed ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:06:13.884 [2024-12-06 03:14:33.895223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:06:13.884 [2024-12-06 03:14:33.896175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:13.884 task offset:
98816 on job bdev=Nvme0n1 fails
00:06:13.884
00:06:13.884 Latency(us)
00:06:13.884 [2024-12-06T02:14:34.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:13.884 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:13.884 Job: Nvme0n1 ended in about 0.41 seconds with error
00:06:13.884 Verification LBA range: start 0x0 length 0x400
00:06:13.884 Nvme0n1 : 0.41 1882.94 117.68 156.91 0.00 30528.31 1403.33 27468.13
00:06:13.884 [2024-12-06T02:14:34.025Z] ===================================================================================================================
00:06:13.884 [2024-12-06T02:14:34.025Z] Total : 1882.94 117.68 156.91 0.00 30528.31 1403.33 27468.13
00:06:13.884 [2024-12-06 03:14:33.898598] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:13.884 [2024-12-06 03:14:33.898624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2149120 (9): Bad file descriptor
00:06:13.884 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:13.884 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:13.884 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:13.884 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:13.884 [2024-12-06 03:14:33.901659] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:06:13.884 [2024-12-06 03:14:33.901746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:06:13.884 [2024-12-06 03:14:33.901769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:13.884 [2024-12-06 03:14:33.901784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:06:13.884 [2024-12-06 03:14:33.901792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:06:13.884 [2024-12-06 03:14:33.901799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:06:13.884 [2024-12-06 03:14:33.901807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2149120
00:06:13.884 [2024-12-06 03:14:33.901826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2149120 (9): Bad file descriptor
00:06:13.884 [2024-12-06 03:14:33.901838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:06:13.884 [2024-12-06 03:14:33.901845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:06:13.884 [2024-12-06 03:14:33.901855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:06:13.884 [2024-12-06 03:14:33.901863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
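The trace above runs the waitforio helper from host_management.sh: it polls bdev_get_iostat until the bdev under test has completed at least 100 reads (67 on the first poll, 707 on the second here), then removes the host from the subsystem to force the abort/reconnect sequence. A self-contained sketch of that polling loop, assuming the shape visible in the trace; `get_read_ops` is a hypothetical stub standing in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops'` pipeline so the sketch runs without an SPDK target:

```shell
# Sketch (bash) of the waitforio pattern: poll read-op counts up to 10
# times, succeeding once at least 100 reads have completed.
get_read_ops() {
    # stub: pretend I/O ramps up with each poll (0 ops, then 350, ...)
    echo $(( $1 * 350 ))
}

waitforio() {
    local bdev=$1 ret=1 i attempt=0 read_io_count
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(get_read_ops "$attempt")
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        attempt=$(( attempt + 1 ))
        sleep 0.25
    done
    return $ret
}

waitforio Nvme0n1 && echo "enough I/O observed"
```

The real helper also refuses to run without a socket path and bdev name, which the sketch omits.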
00:06:13.884 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.884 03:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2457350 00:06:14.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2457350) - No such process 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:14.819 { 00:06:14.819 "params": { 00:06:14.819 "name": "Nvme$subsystem", 00:06:14.819 "trtype": "$TEST_TRANSPORT", 00:06:14.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:14.819 "adrfam": "ipv4", 00:06:14.819 "trsvcid": "$NVMF_PORT", 00:06:14.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:14.819 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:14.819 "hdgst": ${hdgst:-false}, 00:06:14.819 "ddgst": ${ddgst:-false} 00:06:14.819 }, 00:06:14.819 "method": "bdev_nvme_attach_controller" 00:06:14.819 } 00:06:14.819 EOF 00:06:14.819 )") 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:14.819 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:14.820 03:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:14.820 "params": { 00:06:14.820 "name": "Nvme0", 00:06:14.820 "trtype": "tcp", 00:06:14.820 "traddr": "10.0.0.2", 00:06:14.820 "adrfam": "ipv4", 00:06:14.820 "trsvcid": "4420", 00:06:14.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:14.820 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:14.820 "hdgst": false, 00:06:14.820 "ddgst": false 00:06:14.820 }, 00:06:14.820 "method": "bdev_nvme_attach_controller" 00:06:14.820 }' 00:06:15.101 [2024-12-06 03:14:34.967300] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:06:15.101 [2024-12-06 03:14:34.967349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2457814 ] 00:06:15.101 [2024-12-06 03:14:35.030747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.101 [2024-12-06 03:14:35.070479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.101 Running I/O for 1 seconds... 
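The xtrace above shows gen_nvmf_target_json building one attach-controller JSON fragment per subsystem id via a here-document, joining the fragments with `IFS=,`, and normalizing the result with `jq .`. A sketch reconstructed from that trace; the substituted values (trtype tcp, traddr 10.0.0.2, trsvcid 4420) are taken from the log's output rather than the real nvmf/common.sh, and the final `jq .` pass is replaced by a plain printf so the sketch has no external dependencies:

```shell
# Sketch (bash) of the config generator seen in the trace: expand one
# bdev_nvme_attach_controller params block per subsystem id.
gen_nvmf_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join fragments with commas, as the trace's IFS=, / printf pair does
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 0
```

Calling it as `gen_nvmf_target_json 0` yields exactly the Nvme0/cnode0/host0 block that bdevperf consumed above.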
00:06:16.139 1920.00 IOPS, 120.00 MiB/s 00:06:16.139 Latency(us) 00:06:16.139 [2024-12-06T02:14:36.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:16.139 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:16.139 Verification LBA range: start 0x0 length 0x400 00:06:16.139 Nvme0n1 : 1.01 1958.67 122.42 0.00 0.00 32146.74 6211.67 28493.91 00:06:16.139 [2024-12-06T02:14:36.280Z] =================================================================================================================== 00:06:16.139 [2024-12-06T02:14:36.280Z] Total : 1958.67 122.42 0.00 0.00 32146.74 6211.67 28493.91 00:06:16.432 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:16.432 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:16.432 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:16.432 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:16.433 03:14:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:16.433 rmmod nvme_tcp 00:06:16.433 rmmod nvme_fabrics 00:06:16.433 rmmod nvme_keyring 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2457300 ']' 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2457300 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2457300 ']' 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2457300 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2457300 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2457300' 00:06:16.433 killing process with pid 2457300 00:06:16.433 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2457300 00:06:16.433 03:14:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2457300 00:06:16.700 [2024-12-06 03:14:36.696788] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.700 03:14:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:19.235 00:06:19.235 real 0m11.885s 00:06:19.235 user 0m19.497s 
00:06:19.235 sys 0m5.171s 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.235 ************************************ 00:06:19.235 END TEST nvmf_host_management 00:06:19.235 ************************************ 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.235 ************************************ 00:06:19.235 START TEST nvmf_lvol 00:06:19.235 ************************************ 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:19.235 * Looking for test storage... 
00:06:19.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.235 03:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.235 03:14:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.235 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.236 --rc genhtml_branch_coverage=1 00:06:19.236 --rc genhtml_function_coverage=1 00:06:19.236 --rc genhtml_legend=1 00:06:19.236 --rc geninfo_all_blocks=1 00:06:19.236 --rc geninfo_unexecuted_blocks=1 
00:06:19.236 00:06:19.236 ' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.236 --rc genhtml_branch_coverage=1 00:06:19.236 --rc genhtml_function_coverage=1 00:06:19.236 --rc genhtml_legend=1 00:06:19.236 --rc geninfo_all_blocks=1 00:06:19.236 --rc geninfo_unexecuted_blocks=1 00:06:19.236 00:06:19.236 ' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.236 --rc genhtml_branch_coverage=1 00:06:19.236 --rc genhtml_function_coverage=1 00:06:19.236 --rc genhtml_legend=1 00:06:19.236 --rc geninfo_all_blocks=1 00:06:19.236 --rc geninfo_unexecuted_blocks=1 00:06:19.236 00:06:19.236 ' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.236 --rc genhtml_branch_coverage=1 00:06:19.236 --rc genhtml_function_coverage=1 00:06:19.236 --rc genhtml_legend=1 00:06:19.236 --rc geninfo_all_blocks=1 00:06:19.236 --rc geninfo_unexecuted_blocks=1 00:06:19.236 00:06:19.236 ' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.236 03:14:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.236 03:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:24.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:24.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.503 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.504 
03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:24.504 Found net devices under 0000:86:00.0: cvl_0_0 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.504 03:14:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:24.504 Found net devices under 0000:86:00.1: cvl_0_1 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.504 03:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:24.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:24.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:06:24.504 00:06:24.504 --- 10.0.0.2 ping statistics --- 00:06:24.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.504 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:06:24.504 00:06:24.504 --- 10.0.0.1 ping statistics --- 00:06:24.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.504 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
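The trace above moves the target NIC into a private network namespace, addresses both ends, opens the NVMe/TCP port, and verifies reachability in both directions. A minimal sketch of the equivalent plumbing, with interface names, addresses, and port copied from the log (running it for real requires root; `DRY_RUN=1` prints the commands instead of executing them):

```shell
# Sketch of the namespace setup performed by nvmf/common.sh's nvmf_tcp_init.
# Assumption: interface names cvl_0_0/cvl_0_1 and 10.0.0.0/24 addressing are
# taken from the trace; this is an illustrative reconstruction, not the script.
run() { ${DRY_RUN:+echo} "$@"; }   # DRY_RUN=1 echoes commands instead of running them

setup_ns() {
    local ns=cvl_0_0_ns_spdk
    run ip -4 addr flush cvl_0_0
    run ip -4 addr flush cvl_0_1
    run ip netns add "$ns"
    run ip link set cvl_0_0 netns "$ns"            # target NIC moves into the namespace
    run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec "$ns" ip link set cvl_0_0 up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                         # root ns -> namespace
    run ip netns exec "$ns" ping -c 1 10.0.0.1     # namespace -> root ns
}
```

The two pings are the same sanity check the trace logs: both directions must answer before the test proceeds.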
common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2461374 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2461374 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2461374 ']' 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:24.504 [2024-12-06 03:14:44.193712] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:06:24.504 [2024-12-06 03:14:44.193758] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.504 [2024-12-06 03:14:44.260701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.504 [2024-12-06 03:14:44.303747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.504 [2024-12-06 03:14:44.303786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.504 [2024-12-06 03:14:44.303794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.504 [2024-12-06 03:14:44.303800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.504 [2024-12-06 03:14:44.303805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
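At this point the log shows `nvmf_tgt` being launched inside the namespace with a three-core mask, after which the test creates the TCP transport over RPC. A hedged sketch of that launch sequence, with flags copied from the trace (paths shortened; `DRY_RUN=1` prints instead of executing):

```shell
# Illustrative reconstruction of nvmfappstart plus transport creation.
# Assumption: nvmf_tgt and rpc.py are on PATH; flag values are from the log.
run() { ${DRY_RUN:+echo} "$@"; }

start_tgt() {
    # -i 0: shm id, -e 0xFFFF: enable all tracepoint groups, -m 0x7: cores 0-2
    run ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x7
    # -t tcp with -o and in-capsule data size 8192, as logged by nvmf_lvol.sh@21
    run rpc.py nvmf_create_transport -t tcp -o -u 8192
}
```

The reactor-start notices that follow in the log (one per core in the 0x7 mask) confirm the three cores came up.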
00:06:24.504 [2024-12-06 03:14:44.305128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.504 [2024-12-06 03:14:44.305226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.504 [2024-12-06 03:14:44.305228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:24.504 [2024-12-06 03:14:44.616230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.504 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:24.764 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:24.764 03:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:25.023 03:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:25.023 03:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:25.280 03:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:25.537 03:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3ac76139-c253-4859-93ba-f2898e3c21d4 00:06:25.537 03:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3ac76139-c253-4859-93ba-f2898e3c21d4 lvol 20 00:06:25.537 03:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=595f201b-2c38-4812-8f1d-c5c73f89d9e6 00:06:25.537 03:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:25.794 03:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 595f201b-2c38-4812-8f1d-c5c73f89d9e6 00:06:26.051 03:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:26.310 [2024-12-06 03:14:46.227850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.310 03:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.568 03:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2461868 00:06:26.568 03:14:46 
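The RPC calls traced above build the volume stack bottom-up: two 64 MiB malloc bdevs, a RAID-0 over them, an lvstore on the RAID, a 20 MiB lvol, and finally an NVMe-oF subsystem exporting that lvol on 10.0.0.2:4420. A sketch of the sequence (UUIDs are returned by the RPCs at runtime; the placeholders here are assumptions, and `DRY_RUN=1` prints instead of executing):

```shell
# Illustrative reconstruction of nvmf_lvol.sh steps 24-38; rpc.py assumed on PATH.
rpc() { ${DRY_RUN:+echo} rpc.py "$@"; }

provision_lvol() {
    local lvs_uuid=$1 lvol_uuid=$2      # in the real test these come from RPC output
    rpc bdev_malloc_create 64 512       # Malloc0: 64 MiB, 512-byte blocks
    rpc bdev_malloc_create 64 512       # Malloc1
    rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"   # stripe both
    rpc bdev_lvol_create_lvstore raid0 lvs          # prints the lvstore UUID
    rpc bdev_lvol_create -u "$lvs_uuid" lvol 20     # 20 MiB volume on that store
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
}
```

Layering the lvstore on a RAID-0 rather than a single malloc bdev is what lets the later resize/grow steps exercise multi-bdev capacity.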
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:26.568 03:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:27.503 03:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 595f201b-2c38-4812-8f1d-c5c73f89d9e6 MY_SNAPSHOT 00:06:27.762 03:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5aa5dd28-795e-4b40-b00c-898297fe1c36 00:06:27.762 03:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 595f201b-2c38-4812-8f1d-c5c73f89d9e6 30 00:06:28.021 03:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5aa5dd28-795e-4b40-b00c-898297fe1c36 MY_CLONE 00:06:28.280 03:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0f7c5a37-da28-4778-b0db-d39549834907 00:06:28.280 03:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0f7c5a37-da28-4778-b0db-d39549834907 00:06:28.847 03:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2461868 00:06:36.962 Initializing NVMe Controllers 00:06:36.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:36.962 Controller IO queue size 128, less than required. 00:06:36.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:36.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:36.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:36.962 Initialization complete. Launching workers. 00:06:36.962 ======================================================== 00:06:36.962 Latency(us) 00:06:36.962 Device Information : IOPS MiB/s Average min max 00:06:36.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11799.22 46.09 10853.34 606.36 58859.88 00:06:36.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11658.82 45.54 10977.33 3060.93 61182.35 00:06:36.962 ======================================================== 00:06:36.962 Total : 23458.04 91.63 10914.96 606.36 61182.35 00:06:36.962 00:06:36.962 03:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:36.962 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 595f201b-2c38-4812-8f1d-c5c73f89d9e6 00:06:37.220 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ac76139-c253-4859-93ba-f2898e3c21d4 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol 
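While `spdk_nvme_perf` drives random writes against the exported namespace, the test mutates the live lvol: snapshot, resize, clone, inflate. A sketch of that sequence, with the size and names from the trace (the UUID arguments are placeholders for values the real RPCs return; `DRY_RUN=1` prints instead of executing):

```shell
# Illustrative reconstruction of nvmf_lvol.sh steps 47-50 under I/O load.
rpc() { ${DRY_RUN:+echo} rpc.py "$@"; }

exercise_lvol() {
    local lvol_uuid=$1 snap_uuid=$2 clone_uuid=$3    # placeholders; real test parses RPC output
    rpc bdev_lvol_snapshot "$lvol_uuid" MY_SNAPSHOT  # read-only point-in-time copy
    rpc bdev_lvol_resize "$lvol_uuid" 30             # grow the writable volume to 30 MiB
    rpc bdev_lvol_clone "$snap_uuid" MY_CLONE        # thin clone backed by the snapshot
    rpc bdev_lvol_inflate "$clone_uuid"              # copy clusters so the clone stands alone
}
```

Running these while perf I/O is in flight is the point of the test: the latency table above shows both queue-depth-limited cores completing cleanly despite the concurrent metadata operations.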
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:37.479 rmmod nvme_tcp 00:06:37.479 rmmod nvme_fabrics 00:06:37.479 rmmod nvme_keyring 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2461374 ']' 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2461374 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2461374 ']' 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2461374 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.479 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2461374 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2461374' 00:06:37.738 killing process with pid 2461374 00:06:37.738 03:14:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2461374 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2461374 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.738 03:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.287 03:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:40.287 00:06:40.287 real 0m21.039s 00:06:40.287 user 1m2.628s 00:06:40.287 sys 0m7.082s 00:06:40.287 03:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.287 03:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:40.287 ************************************ 00:06:40.287 END TEST 
nvmf_lvol 00:06:40.287 ************************************ 00:06:40.287 03:14:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:40.287 03:14:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.287 03:14:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.287 03:14:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.287 ************************************ 00:06:40.287 START TEST nvmf_lvs_grow 00:06:40.287 ************************************ 00:06:40.287 03:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:40.287 * Looking for test storage... 00:06:40.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.287 03:15:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:40.287 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.288 --rc genhtml_branch_coverage=1 00:06:40.288 --rc genhtml_function_coverage=1 00:06:40.288 --rc genhtml_legend=1 00:06:40.288 --rc geninfo_all_blocks=1 00:06:40.288 --rc geninfo_unexecuted_blocks=1 00:06:40.288 00:06:40.288 ' 
00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.288 --rc genhtml_branch_coverage=1 00:06:40.288 --rc genhtml_function_coverage=1 00:06:40.288 --rc genhtml_legend=1 00:06:40.288 --rc geninfo_all_blocks=1 00:06:40.288 --rc geninfo_unexecuted_blocks=1 00:06:40.288 00:06:40.288 ' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.288 --rc genhtml_branch_coverage=1 00:06:40.288 --rc genhtml_function_coverage=1 00:06:40.288 --rc genhtml_legend=1 00:06:40.288 --rc geninfo_all_blocks=1 00:06:40.288 --rc geninfo_unexecuted_blocks=1 00:06:40.288 00:06:40.288 ' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.288 --rc genhtml_branch_coverage=1 00:06:40.288 --rc genhtml_function_coverage=1 00:06:40.288 --rc genhtml_legend=1 00:06:40.288 --rc geninfo_all_blocks=1 00:06:40.288 --rc geninfo_unexecuted_blocks=1 00:06:40.288 00:06:40.288 ' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.288 03:15:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.288 
03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.288 03:15:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.288 
03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.288 03:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.557 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.557 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:45.557 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:45.557 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:45.557 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:45.557 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:45.557 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:45.558 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:45.558 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.558 
03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:45.558 Found net devices under 0000:86:00.0: cvl_0_0 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:45.558 Found net devices under 0000:86:00.1: cvl_0_1 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:45.558 03:15:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:45.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:06:45.558 00:06:45.558 --- 10.0.0.2 ping statistics --- 00:06:45.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.558 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:06:45.558 00:06:45.558 --- 10.0.0.1 ping statistics --- 00:06:45.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.558 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.558 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2467365 00:06:45.559 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2467365 00:06:45.559 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:45.559 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2467365 ']' 00:06:45.559 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.559 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.559 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.559 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.559 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.559 [2024-12-06 03:15:05.644941] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:06:45.559 [2024-12-06 03:15:05.644993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.829 [2024-12-06 03:15:05.711319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.829 [2024-12-06 03:15:05.753410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.829 [2024-12-06 03:15:05.753449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.829 [2024-12-06 03:15:05.753456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.829 [2024-12-06 03:15:05.753462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.829 [2024-12-06 03:15:05.753471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
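The `nvmf_tcp_init` trace above (nvmf/common.sh@265-287) wires one port of the NIC into a dedicated network namespace before `nvmf_tgt` is launched inside it. The following is a minimal dry-run sketch of that wiring, reusing the interface names and addresses from this log (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2); the `run` wrapper only records and prints each command so the sketch runs unprivileged, whereas the real test script executes them as root.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring traced in nvmf/common.sh above.
# Interface/IP names are copied from this log; nothing here is executed
# for real unless you swap the echo for an eval and run as root.
CMDS=()
run() { CMDS+=("$*"); echo "+ $*"; }

NS=cvl_0_0_ns_spdk                                  # target-side namespace
run ip netns add "$NS"                              # common.sh@271
run ip link set cvl_0_0 netns "$NS"                 # common.sh@274
run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator IP, @277
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, @278
run ip link set cvl_0_1 up                          # common.sh@281
run ip netns exec "$NS" ip link set cvl_0_0 up      # common.sh@283
run ip netns exec "$NS" ip link set lo up           # common.sh@284
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port, @287
```

The two `ping -c 1` checks that follow in the log simply confirm bidirectional reachability across the namespace boundary before the target application is started with `ip netns exec "$NS" …`.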
00:06:45.829 [2024-12-06 03:15:05.754050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.829 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.829 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:45.829 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:45.829 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.829 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.829 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.829 03:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.086 [2024-12-06 03:15:06.059701] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.086 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:46.086 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.086 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.086 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.086 ************************************ 00:06:46.086 START TEST lvs_grow_clean 00:06:46.086 ************************************ 00:06:46.086 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:46.086 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:46.086 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:46.086 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:46.087 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:46.087 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:46.087 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:46.087 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.087 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.087 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:46.344 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:46.344 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:46.602 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:06:46.602 03:15:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:06:46.602 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:46.602 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:46.602 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:46.602 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 lvol 150 00:06:46.861 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c2626029-7127-42c4-ad19-8b1d6eb61f14 00:06:46.861 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.861 03:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:47.119 [2024-12-06 03:15:07.069801] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:47.119 [2024-12-06 03:15:07.069850] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:47.119 true 00:06:47.119 03:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:06:47.119 03:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:47.377 03:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:47.377 03:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:47.377 03:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2626029-7127-42c4-ad19-8b1d6eb61f14 00:06:47.635 03:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:47.894 [2024-12-06 03:15:07.800011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.895 03:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2467834 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2467834 /var/tmp/bdevperf.sock 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2467834 ']' 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:47.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.895 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:48.152 [2024-12-06 03:15:08.050112] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:06:48.152 [2024-12-06 03:15:08.050162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467834 ] 00:06:48.152 [2024-12-06 03:15:08.112570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.152 [2024-12-06 03:15:08.155633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.152 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.152 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:48.152 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:48.410 Nvme0n1 00:06:48.410 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:48.667 [ 00:06:48.667 { 00:06:48.667 "name": "Nvme0n1", 00:06:48.667 "aliases": [ 00:06:48.667 "c2626029-7127-42c4-ad19-8b1d6eb61f14" 00:06:48.667 ], 00:06:48.667 "product_name": "NVMe disk", 00:06:48.667 "block_size": 4096, 00:06:48.667 "num_blocks": 38912, 00:06:48.667 "uuid": "c2626029-7127-42c4-ad19-8b1d6eb61f14", 00:06:48.667 "numa_id": 1, 00:06:48.667 "assigned_rate_limits": { 00:06:48.667 "rw_ios_per_sec": 0, 00:06:48.668 "rw_mbytes_per_sec": 0, 00:06:48.668 "r_mbytes_per_sec": 0, 00:06:48.668 "w_mbytes_per_sec": 0 00:06:48.668 }, 00:06:48.668 "claimed": false, 00:06:48.668 "zoned": false, 00:06:48.668 "supported_io_types": { 00:06:48.668 "read": true, 
00:06:48.668 "write": true, 00:06:48.668 "unmap": true, 00:06:48.668 "flush": true, 00:06:48.668 "reset": true, 00:06:48.668 "nvme_admin": true, 00:06:48.668 "nvme_io": true, 00:06:48.668 "nvme_io_md": false, 00:06:48.668 "write_zeroes": true, 00:06:48.668 "zcopy": false, 00:06:48.668 "get_zone_info": false, 00:06:48.668 "zone_management": false, 00:06:48.668 "zone_append": false, 00:06:48.668 "compare": true, 00:06:48.668 "compare_and_write": true, 00:06:48.668 "abort": true, 00:06:48.668 "seek_hole": false, 00:06:48.668 "seek_data": false, 00:06:48.668 "copy": true, 00:06:48.668 "nvme_iov_md": false 00:06:48.668 }, 00:06:48.668 "memory_domains": [ 00:06:48.668 { 00:06:48.668 "dma_device_id": "system", 00:06:48.668 "dma_device_type": 1 00:06:48.668 } 00:06:48.668 ], 00:06:48.668 "driver_specific": { 00:06:48.668 "nvme": [ 00:06:48.668 { 00:06:48.668 "trid": { 00:06:48.668 "trtype": "TCP", 00:06:48.668 "adrfam": "IPv4", 00:06:48.668 "traddr": "10.0.0.2", 00:06:48.668 "trsvcid": "4420", 00:06:48.668 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:48.668 }, 00:06:48.668 "ctrlr_data": { 00:06:48.668 "cntlid": 1, 00:06:48.668 "vendor_id": "0x8086", 00:06:48.668 "model_number": "SPDK bdev Controller", 00:06:48.668 "serial_number": "SPDK0", 00:06:48.668 "firmware_revision": "25.01", 00:06:48.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:48.668 "oacs": { 00:06:48.668 "security": 0, 00:06:48.668 "format": 0, 00:06:48.668 "firmware": 0, 00:06:48.668 "ns_manage": 0 00:06:48.668 }, 00:06:48.668 "multi_ctrlr": true, 00:06:48.668 "ana_reporting": false 00:06:48.668 }, 00:06:48.668 "vs": { 00:06:48.668 "nvme_version": "1.3" 00:06:48.668 }, 00:06:48.668 "ns_data": { 00:06:48.668 "id": 1, 00:06:48.668 "can_share": true 00:06:48.668 } 00:06:48.668 } 00:06:48.668 ], 00:06:48.668 "mp_policy": "active_passive" 00:06:48.668 } 00:06:48.668 } 00:06:48.668 ] 00:06:48.668 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2468036 00:06:48.668 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:48.668 03:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:48.925 Running I/O for 10 seconds... 00:06:49.859 Latency(us) 00:06:49.859 [2024-12-06T02:15:10.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.859 Nvme0n1 : 1.00 22607.00 88.31 0.00 0.00 0.00 0.00 0.00 00:06:49.859 [2024-12-06T02:15:10.000Z] =================================================================================================================== 00:06:49.859 [2024-12-06T02:15:10.000Z] Total : 22607.00 88.31 0.00 0.00 0.00 0.00 0.00 00:06:49.859 00:06:50.796 03:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:06:50.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.796 Nvme0n1 : 2.00 22606.50 88.31 0.00 0.00 0.00 0.00 0.00 00:06:50.796 [2024-12-06T02:15:10.937Z] =================================================================================================================== 00:06:50.796 [2024-12-06T02:15:10.937Z] Total : 22606.50 88.31 0.00 0.00 0.00 0.00 0.00 00:06:50.796 00:06:50.796 true 00:06:51.055 03:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:06:51.055 03:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:51.055 03:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:51.055 03:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:51.055 03:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2468036 00:06:51.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.992 Nvme0n1 : 3.00 22746.00 88.85 0.00 0.00 0.00 0.00 0.00 00:06:51.992 [2024-12-06T02:15:12.133Z] =================================================================================================================== 00:06:51.992 [2024-12-06T02:15:12.133Z] Total : 22746.00 88.85 0.00 0.00 0.00 0.00 0.00 00:06:51.992 00:06:52.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.930 Nvme0n1 : 4.00 22864.25 89.31 0.00 0.00 0.00 0.00 0.00 00:06:52.930 [2024-12-06T02:15:13.071Z] =================================================================================================================== 00:06:52.930 [2024-12-06T02:15:13.071Z] Total : 22864.25 89.31 0.00 0.00 0.00 0.00 0.00 00:06:52.930 00:06:53.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.883 Nvme0n1 : 5.00 22940.20 89.61 0.00 0.00 0.00 0.00 0.00 00:06:53.883 [2024-12-06T02:15:14.024Z] =================================================================================================================== 00:06:53.883 [2024-12-06T02:15:14.024Z] Total : 22940.20 89.61 0.00 0.00 0.00 0.00 0.00 00:06:53.883 00:06:54.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.820 Nvme0n1 : 6.00 23006.17 89.87 0.00 0.00 0.00 0.00 0.00 00:06:54.820 [2024-12-06T02:15:14.961Z] =================================================================================================================== 00:06:54.820 
[2024-12-06T02:15:14.961Z] Total : 23006.17 89.87 0.00 0.00 0.00 0.00 0.00 00:06:54.820 00:06:55.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.763 Nvme0n1 : 7.00 23050.86 90.04 0.00 0.00 0.00 0.00 0.00 00:06:55.763 [2024-12-06T02:15:15.904Z] =================================================================================================================== 00:06:55.763 [2024-12-06T02:15:15.904Z] Total : 23050.86 90.04 0.00 0.00 0.00 0.00 0.00 00:06:55.763 00:06:56.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.697 Nvme0n1 : 8.00 23084.25 90.17 0.00 0.00 0.00 0.00 0.00 00:06:56.697 [2024-12-06T02:15:16.838Z] =================================================================================================================== 00:06:56.697 [2024-12-06T02:15:16.838Z] Total : 23084.25 90.17 0.00 0.00 0.00 0.00 0.00 00:06:56.697 00:06:58.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.084 Nvme0n1 : 9.00 23114.11 90.29 0.00 0.00 0.00 0.00 0.00 00:06:58.084 [2024-12-06T02:15:18.225Z] =================================================================================================================== 00:06:58.084 [2024-12-06T02:15:18.225Z] Total : 23114.11 90.29 0.00 0.00 0.00 0.00 0.00 00:06:58.084 00:06:59.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.017 Nvme0n1 : 10.00 23133.50 90.37 0.00 0.00 0.00 0.00 0.00 00:06:59.017 [2024-12-06T02:15:19.158Z] =================================================================================================================== 00:06:59.017 [2024-12-06T02:15:19.158Z] Total : 23133.50 90.37 0.00 0.00 0.00 0.00 0.00 00:06:59.017 00:06:59.017 00:06:59.017 Latency(us) 00:06:59.017 [2024-12-06T02:15:19.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:59.017 Nvme0n1 : 10.00 23140.43 90.39 0.00 0.00 5528.55 3219.81 14531.90 00:06:59.017 [2024-12-06T02:15:19.158Z] =================================================================================================================== 00:06:59.017 [2024-12-06T02:15:19.158Z] Total : 23140.43 90.39 0.00 0.00 5528.55 3219.81 14531.90 00:06:59.017 { 00:06:59.017 "results": [ 00:06:59.017 { 00:06:59.017 "job": "Nvme0n1", 00:06:59.017 "core_mask": "0x2", 00:06:59.017 "workload": "randwrite", 00:06:59.017 "status": "finished", 00:06:59.017 "queue_depth": 128, 00:06:59.017 "io_size": 4096, 00:06:59.017 "runtime": 10.002536, 00:06:59.017 "iops": 23140.43158654965, 00:06:59.017 "mibps": 90.39231088495957, 00:06:59.017 "io_failed": 0, 00:06:59.017 "io_timeout": 0, 00:06:59.017 "avg_latency_us": 5528.549011281547, 00:06:59.017 "min_latency_us": 3219.8121739130434, 00:06:59.017 "max_latency_us": 14531.895652173913 00:06:59.017 } 00:06:59.017 ], 00:06:59.017 "core_count": 1 00:06:59.017 } 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2467834 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2467834 ']' 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2467834 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2467834 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:59.017 03:15:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2467834' 00:06:59.017 killing process with pid 2467834 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2467834 00:06:59.017 Received shutdown signal, test time was about 10.000000 seconds 00:06:59.017 00:06:59.017 Latency(us) 00:06:59.017 [2024-12-06T02:15:19.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.017 [2024-12-06T02:15:19.158Z] =================================================================================================================== 00:06:59.017 [2024-12-06T02:15:19.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:59.017 03:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2467834 00:06:59.017 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:59.274 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:59.531 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:06:59.531 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:59.531 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:59.531 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:59.531 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:59.789 [2024-12-06 03:15:19.806568] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.789 
03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.789 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:59.790 03:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:07:00.047 request: 00:07:00.047 { 00:07:00.047 "uuid": "942c40ee-e1b5-4690-8fc4-1ac36d681a50", 00:07:00.048 "method": "bdev_lvol_get_lvstores", 00:07:00.048 "req_id": 1 00:07:00.048 } 00:07:00.048 Got JSON-RPC error response 00:07:00.048 response: 00:07:00.048 { 00:07:00.048 "code": -19, 00:07:00.048 "message": "No such device" 00:07:00.048 } 00:07:00.048 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:00.048 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.048 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.048 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.048 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:00.305 aio_bdev 00:07:00.305 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev c2626029-7127-42c4-ad19-8b1d6eb61f14 00:07:00.305 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c2626029-7127-42c4-ad19-8b1d6eb61f14 00:07:00.305 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:00.305 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:00.305 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:00.305 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:00.305 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:00.305 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c2626029-7127-42c4-ad19-8b1d6eb61f14 -t 2000 00:07:00.563 [ 00:07:00.563 { 00:07:00.563 "name": "c2626029-7127-42c4-ad19-8b1d6eb61f14", 00:07:00.563 "aliases": [ 00:07:00.563 "lvs/lvol" 00:07:00.563 ], 00:07:00.563 "product_name": "Logical Volume", 00:07:00.563 "block_size": 4096, 00:07:00.563 "num_blocks": 38912, 00:07:00.563 "uuid": "c2626029-7127-42c4-ad19-8b1d6eb61f14", 00:07:00.563 "assigned_rate_limits": { 00:07:00.563 "rw_ios_per_sec": 0, 00:07:00.563 "rw_mbytes_per_sec": 0, 00:07:00.563 "r_mbytes_per_sec": 0, 00:07:00.563 "w_mbytes_per_sec": 0 00:07:00.563 }, 00:07:00.563 "claimed": false, 00:07:00.563 "zoned": false, 00:07:00.563 "supported_io_types": { 00:07:00.563 "read": true, 00:07:00.563 "write": true, 00:07:00.563 "unmap": true, 00:07:00.563 "flush": false, 00:07:00.563 "reset": true, 00:07:00.563 
"nvme_admin": false, 00:07:00.563 "nvme_io": false, 00:07:00.563 "nvme_io_md": false, 00:07:00.563 "write_zeroes": true, 00:07:00.563 "zcopy": false, 00:07:00.563 "get_zone_info": false, 00:07:00.563 "zone_management": false, 00:07:00.563 "zone_append": false, 00:07:00.563 "compare": false, 00:07:00.563 "compare_and_write": false, 00:07:00.563 "abort": false, 00:07:00.563 "seek_hole": true, 00:07:00.563 "seek_data": true, 00:07:00.563 "copy": false, 00:07:00.563 "nvme_iov_md": false 00:07:00.563 }, 00:07:00.563 "driver_specific": { 00:07:00.563 "lvol": { 00:07:00.563 "lvol_store_uuid": "942c40ee-e1b5-4690-8fc4-1ac36d681a50", 00:07:00.563 "base_bdev": "aio_bdev", 00:07:00.563 "thin_provision": false, 00:07:00.563 "num_allocated_clusters": 38, 00:07:00.563 "snapshot": false, 00:07:00.563 "clone": false, 00:07:00.563 "esnap_clone": false 00:07:00.563 } 00:07:00.563 } 00:07:00.563 } 00:07:00.563 ] 00:07:00.563 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:00.563 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:07:00.563 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:00.820 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:00.820 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:07:00.820 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:01.079 03:15:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:01.079 03:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c2626029-7127-42c4-ad19-8b1d6eb61f14 00:07:01.079 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 942c40ee-e1b5-4690-8fc4-1ac36d681a50 00:07:01.338 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.597 00:07:01.597 real 0m15.449s 00:07:01.597 user 0m14.994s 00:07:01.597 sys 0m1.471s 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:01.597 ************************************ 00:07:01.597 END TEST lvs_grow_clean 00:07:01.597 ************************************ 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.597 ************************************ 
00:07:01.597 START TEST lvs_grow_dirty 00:07:01.597 ************************************ 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.597 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:01.856 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:01.856 03:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:02.114 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:02.114 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:02.115 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:02.115 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:02.115 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:02.115 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 lvol 150 00:07:02.373 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2593aa43-ac12-462c-8586-d922d8f2b800 00:07:02.373 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.373 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:02.632 [2024-12-06 03:15:22.608836] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:02.632 [2024-12-06 03:15:22.608885] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:02.632 true 00:07:02.632 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:02.632 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:02.891 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:02.891 03:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:02.891 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2593aa43-ac12-462c-8586-d922d8f2b800 00:07:03.149 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:03.408 [2024-12-06 03:15:23.359085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.408 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2470858 00:07:03.667 03:15:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2470858 /var/tmp/bdevperf.sock 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2470858 ']' 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:03.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:03.667 [2024-12-06 03:15:23.598103] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:07:03.667 [2024-12-06 03:15:23.598150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470858 ] 00:07:03.667 [2024-12-06 03:15:23.659286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.667 [2024-12-06 03:15:23.699848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:03.667 03:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:04.235 Nvme0n1 00:07:04.235 03:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:04.494 [ 00:07:04.494 { 00:07:04.494 "name": "Nvme0n1", 00:07:04.494 "aliases": [ 00:07:04.494 "2593aa43-ac12-462c-8586-d922d8f2b800" 00:07:04.494 ], 00:07:04.494 "product_name": "NVMe disk", 00:07:04.494 "block_size": 4096, 00:07:04.494 "num_blocks": 38912, 00:07:04.494 "uuid": "2593aa43-ac12-462c-8586-d922d8f2b800", 00:07:04.494 "numa_id": 1, 00:07:04.494 "assigned_rate_limits": { 00:07:04.494 "rw_ios_per_sec": 0, 00:07:04.494 "rw_mbytes_per_sec": 0, 00:07:04.494 "r_mbytes_per_sec": 0, 00:07:04.494 "w_mbytes_per_sec": 0 00:07:04.494 }, 00:07:04.494 "claimed": false, 00:07:04.494 "zoned": false, 00:07:04.494 "supported_io_types": { 00:07:04.494 "read": true, 
00:07:04.494 "write": true, 00:07:04.494 "unmap": true, 00:07:04.494 "flush": true, 00:07:04.494 "reset": true, 00:07:04.494 "nvme_admin": true, 00:07:04.494 "nvme_io": true, 00:07:04.494 "nvme_io_md": false, 00:07:04.494 "write_zeroes": true, 00:07:04.494 "zcopy": false, 00:07:04.494 "get_zone_info": false, 00:07:04.494 "zone_management": false, 00:07:04.494 "zone_append": false, 00:07:04.494 "compare": true, 00:07:04.494 "compare_and_write": true, 00:07:04.494 "abort": true, 00:07:04.494 "seek_hole": false, 00:07:04.494 "seek_data": false, 00:07:04.494 "copy": true, 00:07:04.494 "nvme_iov_md": false 00:07:04.494 }, 00:07:04.494 "memory_domains": [ 00:07:04.494 { 00:07:04.494 "dma_device_id": "system", 00:07:04.494 "dma_device_type": 1 00:07:04.494 } 00:07:04.494 ], 00:07:04.494 "driver_specific": { 00:07:04.494 "nvme": [ 00:07:04.494 { 00:07:04.494 "trid": { 00:07:04.494 "trtype": "TCP", 00:07:04.494 "adrfam": "IPv4", 00:07:04.494 "traddr": "10.0.0.2", 00:07:04.494 "trsvcid": "4420", 00:07:04.494 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:04.494 }, 00:07:04.494 "ctrlr_data": { 00:07:04.494 "cntlid": 1, 00:07:04.494 "vendor_id": "0x8086", 00:07:04.494 "model_number": "SPDK bdev Controller", 00:07:04.494 "serial_number": "SPDK0", 00:07:04.494 "firmware_revision": "25.01", 00:07:04.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:04.494 "oacs": { 00:07:04.494 "security": 0, 00:07:04.494 "format": 0, 00:07:04.494 "firmware": 0, 00:07:04.494 "ns_manage": 0 00:07:04.494 }, 00:07:04.494 "multi_ctrlr": true, 00:07:04.494 "ana_reporting": false 00:07:04.494 }, 00:07:04.494 "vs": { 00:07:04.494 "nvme_version": "1.3" 00:07:04.494 }, 00:07:04.494 "ns_data": { 00:07:04.494 "id": 1, 00:07:04.494 "can_share": true 00:07:04.494 } 00:07:04.494 } 00:07:04.494 ], 00:07:04.494 "mp_policy": "active_passive" 00:07:04.494 } 00:07:04.494 } 00:07:04.494 ] 00:07:04.494 03:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2470932 00:07:04.494 03:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:04.494 03:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:04.494 Running I/O for 10 seconds... 00:07:05.430 Latency(us) 00:07:05.430 [2024-12-06T02:15:25.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.430 Nvme0n1 : 1.00 22887.00 89.40 0.00 0.00 0.00 0.00 0.00 00:07:05.430 [2024-12-06T02:15:25.571Z] =================================================================================================================== 00:07:05.430 [2024-12-06T02:15:25.571Z] Total : 22887.00 89.40 0.00 0.00 0.00 0.00 0.00 00:07:05.430 00:07:06.366 03:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:06.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.625 Nvme0n1 : 2.00 23002.00 89.85 0.00 0.00 0.00 0.00 0.00 00:07:06.625 [2024-12-06T02:15:26.766Z] =================================================================================================================== 00:07:06.625 [2024-12-06T02:15:26.766Z] Total : 23002.00 89.85 0.00 0.00 0.00 0.00 0.00 00:07:06.625 00:07:06.625 true 00:07:06.625 03:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:06.625 03:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:06.883 03:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:06.883 03:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:06.883 03:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2470932 00:07:07.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.451 Nvme0n1 : 3.00 23076.33 90.14 0.00 0.00 0.00 0.00 0.00 00:07:07.451 [2024-12-06T02:15:27.592Z] =================================================================================================================== 00:07:07.451 [2024-12-06T02:15:27.592Z] Total : 23076.33 90.14 0.00 0.00 0.00 0.00 0.00 00:07:07.451 00:07:08.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.829 Nvme0n1 : 4.00 23150.00 90.43 0.00 0.00 0.00 0.00 0.00 00:07:08.829 [2024-12-06T02:15:28.970Z] =================================================================================================================== 00:07:08.829 [2024-12-06T02:15:28.970Z] Total : 23150.00 90.43 0.00 0.00 0.00 0.00 0.00 00:07:08.829 00:07:09.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.764 Nvme0n1 : 5.00 23144.80 90.41 0.00 0.00 0.00 0.00 0.00 00:07:09.764 [2024-12-06T02:15:29.905Z] =================================================================================================================== 00:07:09.764 [2024-12-06T02:15:29.905Z] Total : 23144.80 90.41 0.00 0.00 0.00 0.00 0.00 00:07:09.764 00:07:10.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.700 Nvme0n1 : 6.00 23160.83 90.47 0.00 0.00 0.00 0.00 0.00 00:07:10.700 [2024-12-06T02:15:30.841Z] =================================================================================================================== 
00:07:10.700 [2024-12-06T02:15:30.841Z] Total : 23160.83 90.47 0.00 0.00 0.00 0.00 0.00 00:07:10.700 00:07:11.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.640 Nvme0n1 : 7.00 23208.57 90.66 0.00 0.00 0.00 0.00 0.00 00:07:11.640 [2024-12-06T02:15:31.781Z] =================================================================================================================== 00:07:11.640 [2024-12-06T02:15:31.781Z] Total : 23208.57 90.66 0.00 0.00 0.00 0.00 0.00 00:07:11.640 00:07:12.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.573 Nvme0n1 : 8.00 23235.50 90.76 0.00 0.00 0.00 0.00 0.00 00:07:12.573 [2024-12-06T02:15:32.714Z] =================================================================================================================== 00:07:12.573 [2024-12-06T02:15:32.714Z] Total : 23235.50 90.76 0.00 0.00 0.00 0.00 0.00 00:07:12.573 00:07:13.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.507 Nvme0n1 : 9.00 23260.78 90.86 0.00 0.00 0.00 0.00 0.00 00:07:13.507 [2024-12-06T02:15:33.648Z] =================================================================================================================== 00:07:13.507 [2024-12-06T02:15:33.648Z] Total : 23260.78 90.86 0.00 0.00 0.00 0.00 0.00 00:07:13.507 00:07:14.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.442 Nvme0n1 : 10.00 23269.70 90.90 0.00 0.00 0.00 0.00 0.00 00:07:14.442 [2024-12-06T02:15:34.583Z] =================================================================================================================== 00:07:14.442 [2024-12-06T02:15:34.583Z] Total : 23269.70 90.90 0.00 0.00 0.00 0.00 0.00 00:07:14.442 00:07:14.442 00:07:14.442 Latency(us) 00:07:14.442 [2024-12-06T02:15:34.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:07:14.442 Nvme0n1 : 10.00 23273.35 90.91 0.00 0.00 5496.79 2450.48 11454.55 00:07:14.442 [2024-12-06T02:15:34.583Z] =================================================================================================================== 00:07:14.442 [2024-12-06T02:15:34.583Z] Total : 23273.35 90.91 0.00 0.00 5496.79 2450.48 11454.55 00:07:14.442 { 00:07:14.442 "results": [ 00:07:14.442 { 00:07:14.442 "job": "Nvme0n1", 00:07:14.442 "core_mask": "0x2", 00:07:14.442 "workload": "randwrite", 00:07:14.442 "status": "finished", 00:07:14.442 "queue_depth": 128, 00:07:14.442 "io_size": 4096, 00:07:14.442 "runtime": 10.003931, 00:07:14.442 "iops": 23273.351245625345, 00:07:14.442 "mibps": 90.911528303224, 00:07:14.442 "io_failed": 0, 00:07:14.442 "io_timeout": 0, 00:07:14.442 "avg_latency_us": 5496.786955263096, 00:07:14.442 "min_latency_us": 2450.4765217391305, 00:07:14.442 "max_latency_us": 11454.553043478261 00:07:14.442 } 00:07:14.442 ], 00:07:14.442 "core_count": 1 00:07:14.442 } 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2470858 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2470858 ']' 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2470858 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470858 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:14.700 03:15:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470858' 00:07:14.700 killing process with pid 2470858 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2470858 00:07:14.700 Received shutdown signal, test time was about 10.000000 seconds 00:07:14.700 00:07:14.700 Latency(us) 00:07:14.700 [2024-12-06T02:15:34.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.700 [2024-12-06T02:15:34.841Z] =================================================================================================================== 00:07:14.700 [2024-12-06T02:15:34.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2470858 00:07:14.700 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:14.959 03:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:15.216 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:15.216 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2467365 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2467365 00:07:15.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2467365 Killed "${NVMF_APP[@]}" "$@" 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2472725 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2472725 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2472725 ']' 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.474 03:15:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.474 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.474 [2024-12-06 03:15:35.440088] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:07:15.474 [2024-12-06 03:15:35.440137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.474 [2024-12-06 03:15:35.502304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.474 [2024-12-06 03:15:35.543614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.474 [2024-12-06 03:15:35.543649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.474 [2024-12-06 03:15:35.543656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.474 [2024-12-06 03:15:35.543662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.474 [2024-12-06 03:15:35.543669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:15.474 [2024-12-06 03:15:35.544248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:15.731 [2024-12-06 03:15:35.846725] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:15.731 [2024-12-06 03:15:35.846811] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:15.731 [2024-12-06 03:15:35.846837] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2593aa43-ac12-462c-8586-d922d8f2b800 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2593aa43-ac12-462c-8586-d922d8f2b800 
00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.731 03:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:15.988 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2593aa43-ac12-462c-8586-d922d8f2b800 -t 2000 00:07:16.246 [ 00:07:16.246 { 00:07:16.246 "name": "2593aa43-ac12-462c-8586-d922d8f2b800", 00:07:16.246 "aliases": [ 00:07:16.246 "lvs/lvol" 00:07:16.246 ], 00:07:16.246 "product_name": "Logical Volume", 00:07:16.246 "block_size": 4096, 00:07:16.246 "num_blocks": 38912, 00:07:16.246 "uuid": "2593aa43-ac12-462c-8586-d922d8f2b800", 00:07:16.246 "assigned_rate_limits": { 00:07:16.246 "rw_ios_per_sec": 0, 00:07:16.246 "rw_mbytes_per_sec": 0, 00:07:16.246 "r_mbytes_per_sec": 0, 00:07:16.246 "w_mbytes_per_sec": 0 00:07:16.246 }, 00:07:16.246 "claimed": false, 00:07:16.246 "zoned": false, 00:07:16.246 "supported_io_types": { 00:07:16.246 "read": true, 00:07:16.246 "write": true, 00:07:16.246 "unmap": true, 00:07:16.246 "flush": false, 00:07:16.246 "reset": true, 00:07:16.246 "nvme_admin": false, 00:07:16.246 "nvme_io": false, 00:07:16.246 "nvme_io_md": false, 00:07:16.246 "write_zeroes": true, 00:07:16.246 "zcopy": false, 00:07:16.246 "get_zone_info": false, 00:07:16.246 "zone_management": false, 00:07:16.246 "zone_append": 
false, 00:07:16.246 "compare": false, 00:07:16.246 "compare_and_write": false, 00:07:16.246 "abort": false, 00:07:16.246 "seek_hole": true, 00:07:16.246 "seek_data": true, 00:07:16.246 "copy": false, 00:07:16.246 "nvme_iov_md": false 00:07:16.246 }, 00:07:16.246 "driver_specific": { 00:07:16.246 "lvol": { 00:07:16.246 "lvol_store_uuid": "16815fc0-6b30-4d8d-9a6a-4c572b8f62f1", 00:07:16.246 "base_bdev": "aio_bdev", 00:07:16.246 "thin_provision": false, 00:07:16.246 "num_allocated_clusters": 38, 00:07:16.246 "snapshot": false, 00:07:16.246 "clone": false, 00:07:16.246 "esnap_clone": false 00:07:16.246 } 00:07:16.246 } 00:07:16.246 } 00:07:16.246 ] 00:07:16.246 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:16.246 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:16.246 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:16.504 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:16.504 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:16.504 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:16.504 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:16.504 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:16.762 [2024-12-06 03:15:36.791710] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.762 03:15:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:16.762 03:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:17.020 request: 00:07:17.020 { 00:07:17.020 "uuid": "16815fc0-6b30-4d8d-9a6a-4c572b8f62f1", 00:07:17.020 "method": "bdev_lvol_get_lvstores", 00:07:17.020 "req_id": 1 00:07:17.020 } 00:07:17.020 Got JSON-RPC error response 00:07:17.020 response: 00:07:17.020 { 00:07:17.020 "code": -19, 00:07:17.020 "message": "No such device" 00:07:17.020 } 00:07:17.020 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:17.020 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.020 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.020 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.021 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:17.280 aio_bdev 00:07:17.280 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2593aa43-ac12-462c-8586-d922d8f2b800 00:07:17.280 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2593aa43-ac12-462c-8586-d922d8f2b800 00:07:17.280 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:17.280 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:17.280 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:17.280 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:17.280 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:17.280 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2593aa43-ac12-462c-8586-d922d8f2b800 -t 2000 00:07:17.539 [ 00:07:17.539 { 00:07:17.539 "name": "2593aa43-ac12-462c-8586-d922d8f2b800", 00:07:17.539 "aliases": [ 00:07:17.539 "lvs/lvol" 00:07:17.539 ], 00:07:17.539 "product_name": "Logical Volume", 00:07:17.539 "block_size": 4096, 00:07:17.539 "num_blocks": 38912, 00:07:17.539 "uuid": "2593aa43-ac12-462c-8586-d922d8f2b800", 00:07:17.539 "assigned_rate_limits": { 00:07:17.539 "rw_ios_per_sec": 0, 00:07:17.539 "rw_mbytes_per_sec": 0, 00:07:17.539 "r_mbytes_per_sec": 0, 00:07:17.539 "w_mbytes_per_sec": 0 00:07:17.539 }, 00:07:17.539 "claimed": false, 00:07:17.539 "zoned": false, 00:07:17.539 "supported_io_types": { 00:07:17.539 "read": true, 00:07:17.539 "write": true, 00:07:17.539 "unmap": true, 00:07:17.539 "flush": false, 00:07:17.539 "reset": true, 00:07:17.539 "nvme_admin": false, 00:07:17.539 "nvme_io": false, 00:07:17.539 "nvme_io_md": false, 00:07:17.539 "write_zeroes": true, 00:07:17.539 "zcopy": false, 00:07:17.539 "get_zone_info": false, 00:07:17.539 "zone_management": false, 00:07:17.539 "zone_append": false, 00:07:17.539 "compare": false, 00:07:17.539 "compare_and_write": false, 
00:07:17.539 "abort": false, 00:07:17.539 "seek_hole": true, 00:07:17.539 "seek_data": true, 00:07:17.539 "copy": false, 00:07:17.539 "nvme_iov_md": false 00:07:17.539 }, 00:07:17.539 "driver_specific": { 00:07:17.539 "lvol": { 00:07:17.539 "lvol_store_uuid": "16815fc0-6b30-4d8d-9a6a-4c572b8f62f1", 00:07:17.539 "base_bdev": "aio_bdev", 00:07:17.539 "thin_provision": false, 00:07:17.539 "num_allocated_clusters": 38, 00:07:17.539 "snapshot": false, 00:07:17.539 "clone": false, 00:07:17.539 "esnap_clone": false 00:07:17.539 } 00:07:17.539 } 00:07:17.539 } 00:07:17.539 ] 00:07:17.539 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:17.539 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:17.539 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:17.797 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:17.797 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:17.797 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:18.055 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:18.055 03:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2593aa43-ac12-462c-8586-d922d8f2b800 00:07:18.055 03:15:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16815fc0-6b30-4d8d-9a6a-4c572b8f62f1 00:07:18.313 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:18.571 00:07:18.571 real 0m16.896s 00:07:18.571 user 0m43.643s 00:07:18.571 sys 0m3.719s 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:18.571 ************************************ 00:07:18.571 END TEST lvs_grow_dirty 00:07:18.571 ************************************ 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:18.571 nvmf_trace.0 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:18.571 rmmod nvme_tcp 00:07:18.571 rmmod nvme_fabrics 00:07:18.571 rmmod nvme_keyring 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2472725 ']' 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2472725 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2472725 ']' 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2472725 
00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.571 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472725 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472725' 00:07:18.830 killing process with pid 2472725 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2472725 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2472725 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.830 03:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.365 03:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:21.365 00:07:21.365 real 0m41.003s 00:07:21.365 user 1m4.057s 00:07:21.365 sys 0m9.656s 00:07:21.365 03:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.365 03:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.365 ************************************ 00:07:21.365 END TEST nvmf_lvs_grow 00:07:21.365 ************************************ 00:07:21.365 03:15:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:21.365 03:15:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.365 03:15:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.365 03:15:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.365 ************************************ 00:07:21.365 START TEST nvmf_bdev_io_wait 00:07:21.365 ************************************ 00:07:21.365 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:21.365 * Looking for test storage... 
00:07:21.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.365 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:21.366 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.366 --rc genhtml_branch_coverage=1 00:07:21.366 --rc genhtml_function_coverage=1 00:07:21.366 --rc genhtml_legend=1 00:07:21.366 --rc geninfo_all_blocks=1 00:07:21.366 --rc geninfo_unexecuted_blocks=1 00:07:21.366 00:07:21.366 ' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:21.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.366 --rc genhtml_branch_coverage=1 00:07:21.366 --rc genhtml_function_coverage=1 00:07:21.366 --rc genhtml_legend=1 00:07:21.366 --rc geninfo_all_blocks=1 00:07:21.366 --rc geninfo_unexecuted_blocks=1 00:07:21.366 00:07:21.366 ' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:21.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.366 --rc genhtml_branch_coverage=1 00:07:21.366 --rc genhtml_function_coverage=1 00:07:21.366 --rc genhtml_legend=1 00:07:21.366 --rc geninfo_all_blocks=1 00:07:21.366 --rc geninfo_unexecuted_blocks=1 00:07:21.366 00:07:21.366 ' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:21.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.366 --rc genhtml_branch_coverage=1 00:07:21.366 --rc genhtml_function_coverage=1 00:07:21.366 --rc genhtml_legend=1 00:07:21.366 --rc geninfo_all_blocks=1 00:07:21.366 --rc geninfo_unexecuted_blocks=1 00:07:21.366 00:07:21.366 ' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.366 03:15:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.366 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.367 03:15:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.637 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:26.638 03:15:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:26.638 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:26.638 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.638 03:15:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:26.638 Found net devices under 0000:86:00.0: cvl_0_0 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.638 
03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:26.638 Found net devices under 0000:86:00.1: cvl_0_1 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.638 03:15:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:26.638 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.898 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.898 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:26.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:07:26.899 00:07:26.899 --- 10.0.0.2 ping statistics --- 00:07:26.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.899 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:26.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:07:26.899 00:07:26.899 --- 10.0.0.1 ping statistics --- 00:07:26.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.899 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2476992 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2476992 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2476992 ']' 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.899 03:15:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.899 [2024-12-06 03:15:46.946402] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:07:26.899 [2024-12-06 03:15:46.946448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.899 [2024-12-06 03:15:47.011308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.159 [2024-12-06 03:15:47.056359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.159 [2024-12-06 03:15:47.056394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:27.159 [2024-12-06 03:15:47.056401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.159 [2024-12-06 03:15:47.056407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.159 [2024-12-06 03:15:47.056412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.159 [2024-12-06 03:15:47.059963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.159 [2024-12-06 03:15:47.059982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.159 [2024-12-06 03:15:47.060064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.159 [2024-12-06 03:15:47.060067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 03:15:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 [2024-12-06 03:15:47.219943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 Malloc0 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.159 
03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 [2024-12-06 03:15:47.275630] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2477025 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2477027 
00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.159 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.159 { 00:07:27.159 "params": { 00:07:27.159 "name": "Nvme$subsystem", 00:07:27.159 "trtype": "$TEST_TRANSPORT", 00:07:27.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.159 "adrfam": "ipv4", 00:07:27.159 "trsvcid": "$NVMF_PORT", 00:07:27.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.159 "hdgst": ${hdgst:-false}, 00:07:27.159 "ddgst": ${ddgst:-false} 00:07:27.159 }, 00:07:27.159 "method": "bdev_nvme_attach_controller" 00:07:27.159 } 00:07:27.159 EOF 00:07:27.159 )") 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2477029 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.160 { 00:07:27.160 "params": { 00:07:27.160 "name": "Nvme$subsystem", 00:07:27.160 "trtype": "$TEST_TRANSPORT", 00:07:27.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.160 "adrfam": "ipv4", 00:07:27.160 "trsvcid": "$NVMF_PORT", 00:07:27.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.160 "hdgst": ${hdgst:-false}, 00:07:27.160 "ddgst": ${ddgst:-false} 00:07:27.160 }, 00:07:27.160 "method": "bdev_nvme_attach_controller" 00:07:27.160 } 00:07:27.160 EOF 00:07:27.160 )") 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2477032 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:27.160 03:15:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.160 { 00:07:27.160 "params": { 00:07:27.160 "name": "Nvme$subsystem", 00:07:27.160 "trtype": "$TEST_TRANSPORT", 00:07:27.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.160 "adrfam": "ipv4", 00:07:27.160 "trsvcid": "$NVMF_PORT", 00:07:27.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.160 "hdgst": ${hdgst:-false}, 00:07:27.160 "ddgst": ${ddgst:-false} 00:07:27.160 }, 00:07:27.160 "method": "bdev_nvme_attach_controller" 00:07:27.160 } 00:07:27.160 EOF 00:07:27.160 )") 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.160 { 00:07:27.160 "params": { 00:07:27.160 "name": "Nvme$subsystem", 00:07:27.160 "trtype": "$TEST_TRANSPORT", 00:07:27.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.160 "adrfam": "ipv4", 00:07:27.160 "trsvcid": "$NVMF_PORT", 00:07:27.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.160 "hdgst": ${hdgst:-false}, 00:07:27.160 "ddgst": ${ddgst:-false} 00:07:27.160 }, 00:07:27.160 "method": "bdev_nvme_attach_controller" 00:07:27.160 } 00:07:27.160 EOF 00:07:27.160 )") 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2477025 00:07:27.160 03:15:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.160 "params": { 00:07:27.160 "name": "Nvme1", 00:07:27.160 "trtype": "tcp", 00:07:27.160 "traddr": "10.0.0.2", 00:07:27.160 "adrfam": "ipv4", 00:07:27.160 "trsvcid": "4420", 00:07:27.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.160 "hdgst": false, 00:07:27.160 "ddgst": false 00:07:27.160 }, 00:07:27.160 "method": "bdev_nvme_attach_controller" 00:07:27.160 }' 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.160 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.160 "params": { 00:07:27.160 "name": "Nvme1", 00:07:27.160 "trtype": "tcp", 00:07:27.160 "traddr": "10.0.0.2", 00:07:27.160 "adrfam": "ipv4", 00:07:27.160 "trsvcid": "4420", 00:07:27.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.160 "hdgst": false, 00:07:27.160 "ddgst": false 00:07:27.160 }, 00:07:27.160 "method": "bdev_nvme_attach_controller" 00:07:27.160 }' 00:07:27.420 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.420 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.420 "params": { 00:07:27.420 "name": "Nvme1", 00:07:27.420 "trtype": "tcp", 00:07:27.420 "traddr": "10.0.0.2", 00:07:27.420 "adrfam": "ipv4", 00:07:27.420 "trsvcid": "4420", 00:07:27.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.420 "hdgst": false, 00:07:27.420 "ddgst": false 00:07:27.420 }, 00:07:27.420 "method": "bdev_nvme_attach_controller" 00:07:27.420 }' 00:07:27.420 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.420 03:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.420 "params": { 00:07:27.420 "name": "Nvme1", 00:07:27.420 "trtype": "tcp", 00:07:27.420 "traddr": "10.0.0.2", 00:07:27.420 "adrfam": "ipv4", 00:07:27.420 "trsvcid": "4420", 00:07:27.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.420 "hdgst": false, 00:07:27.420 "ddgst": false 00:07:27.420 }, 00:07:27.420 "method": "bdev_nvme_attach_controller" 00:07:27.420 }' 00:07:27.420 [2024-12-06 03:15:47.327421] Starting SPDK v25.01-pre git sha1 
05632f11a / DPDK 24.03.0 initialization... 00:07:27.420 [2024-12-06 03:15:47.327469] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:27.420 [2024-12-06 03:15:47.329520] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:07:27.420 [2024-12-06 03:15:47.329560] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:27.420 [2024-12-06 03:15:47.330468] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:07:27.420 [2024-12-06 03:15:47.330507] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:27.420 [2024-12-06 03:15:47.332591] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:07:27.420 [2024-12-06 03:15:47.332639] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:27.420 [2024-12-06 03:15:47.509841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.420 [2024-12-06 03:15:47.552771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:27.680 [2024-12-06 03:15:47.603122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.680 [2024-12-06 03:15:47.646058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:27.680 [2024-12-06 03:15:47.699601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.680 [2024-12-06 03:15:47.753106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.680 [2024-12-06 03:15:47.759081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:27.680 [2024-12-06 03:15:47.795977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:27.939 Running I/O for 1 seconds... 00:07:27.939 Running I/O for 1 seconds... 00:07:27.939 Running I/O for 1 seconds... 00:07:27.939 Running I/O for 1 seconds... 
00:07:28.877 13371.00 IOPS, 52.23 MiB/s 00:07:28.877 Latency(us) 00:07:28.877 [2024-12-06T02:15:49.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.877 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:28.877 Nvme1n1 : 1.01 13431.86 52.47 0.00 0.00 9501.84 4331.07 15386.71 00:07:28.877 [2024-12-06T02:15:49.018Z] =================================================================================================================== 00:07:28.877 [2024-12-06T02:15:49.018Z] Total : 13431.86 52.47 0.00 0.00 9501.84 4331.07 15386.71 00:07:28.877 9364.00 IOPS, 36.58 MiB/s 00:07:28.877 Latency(us) 00:07:28.877 [2024-12-06T02:15:49.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.877 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:28.877 Nvme1n1 : 1.01 9417.85 36.79 0.00 0.00 13534.02 6924.02 22453.20 00:07:28.877 [2024-12-06T02:15:49.018Z] =================================================================================================================== 00:07:28.877 [2024-12-06T02:15:49.018Z] Total : 9417.85 36.79 0.00 0.00 13534.02 6924.02 22453.20 00:07:29.138 9955.00 IOPS, 38.89 MiB/s 00:07:29.138 Latency(us) 00:07:29.138 [2024-12-06T02:15:49.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.138 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:29.138 Nvme1n1 : 1.01 10039.24 39.22 0.00 0.00 12715.20 3647.22 25302.59 00:07:29.138 [2024-12-06T02:15:49.279Z] =================================================================================================================== 00:07:29.138 [2024-12-06T02:15:49.279Z] Total : 10039.24 39.22 0.00 0.00 12715.20 3647.22 25302.59 00:07:29.138 235856.00 IOPS, 921.31 MiB/s 00:07:29.138 Latency(us) 00:07:29.138 [2024-12-06T02:15:49.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.138 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:07:29.138 Nvme1n1 : 1.00 235492.86 919.89 0.00 0.00 541.08 227.95 1531.55 00:07:29.138 [2024-12-06T02:15:49.279Z] =================================================================================================================== 00:07:29.138 [2024-12-06T02:15:49.279Z] Total : 235492.86 919.89 0.00 0.00 541.08 227.95 1531.55 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2477027 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2477029 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2477032 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.138 
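As a sanity check on the bdevperf latency tables above, the MiB/s column is simply IOPS scaled by the 4096-byte IO size passed via `-o 4096` to every bdevperf invocation. A quick standalone check (illustrative only, not part of the test suite; values taken from this log):

```python
# Verify that bdevperf's MiB/s column equals IOPS * IO size / 1 MiB
# for rows from the tables above.
IO_SIZE = 4096  # -o 4096 passed to every bdevperf invocation in this run

def mibps(iops, io_size=IO_SIZE):
    """Convert an IOPS figure to MiB/s for a fixed IO size."""
    return iops * io_size / (1024 * 1024)

# unmap job (core mask 0x80): 13431.86 IOPS -> 52.47 MiB/s
assert round(mibps(13431.86), 2) == 52.47
# read job (core mask 0x20): 9417.85 IOPS -> 36.79 MiB/s
assert round(mibps(9417.85), 2) == 36.79
# flush job (core mask 0x40): 235492.86 IOPS -> 919.89 MiB/s
assert round(mibps(235492.86), 2) == 919.89
```

The flush job reports far higher IOPS than the data-moving workloads because flushes against a malloc bdev carry no payload.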
03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.138 rmmod nvme_tcp 00:07:29.138 rmmod nvme_fabrics 00:07:29.138 rmmod nvme_keyring 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2476992 ']' 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2476992 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2476992 ']' 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2476992 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.138 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476992 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2476992' 00:07:29.399 killing process with pid 2476992 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2476992 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 2476992 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.399 03:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.965 00:07:31.965 real 0m10.502s 00:07:31.965 user 0m16.539s 00:07:31.965 sys 0m5.883s 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.965 ************************************ 00:07:31.965 END TEST nvmf_bdev_io_wait 
00:07:31.965 ************************************ 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.965 ************************************ 00:07:31.965 START TEST nvmf_queue_depth 00:07:31.965 ************************************ 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:31.965 * Looking for test storage... 00:07:31.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.965 03:15:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.965 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:31.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.966 --rc genhtml_branch_coverage=1 00:07:31.966 --rc genhtml_function_coverage=1 00:07:31.966 --rc genhtml_legend=1 00:07:31.966 --rc geninfo_all_blocks=1 00:07:31.966 --rc 
geninfo_unexecuted_blocks=1 00:07:31.966 00:07:31.966 ' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:31.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.966 --rc genhtml_branch_coverage=1 00:07:31.966 --rc genhtml_function_coverage=1 00:07:31.966 --rc genhtml_legend=1 00:07:31.966 --rc geninfo_all_blocks=1 00:07:31.966 --rc geninfo_unexecuted_blocks=1 00:07:31.966 00:07:31.966 ' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:31.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.966 --rc genhtml_branch_coverage=1 00:07:31.966 --rc genhtml_function_coverage=1 00:07:31.966 --rc genhtml_legend=1 00:07:31.966 --rc geninfo_all_blocks=1 00:07:31.966 --rc geninfo_unexecuted_blocks=1 00:07:31.966 00:07:31.966 ' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:31.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.966 --rc genhtml_branch_coverage=1 00:07:31.966 --rc genhtml_function_coverage=1 00:07:31.966 --rc genhtml_legend=1 00:07:31.966 --rc geninfo_all_blocks=1 00:07:31.966 --rc geninfo_unexecuted_blocks=1 00:07:31.966 00:07:31.966 ' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.966 03:15:51 
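The `cmp_versions` trace above (deciding whether lcov 1.15 < 2) splits each version string on `.` and compares components numerically, left to right, padding the shorter version with zeros. A minimal Python equivalent of that logic (an illustrative sketch of the traced behavior, not code from scripts/common.sh; like the shell loop, it assumes purely numeric components):

```python
def version_lt(a: str, b: str) -> bool:
    """Component-wise numeric 'less than' on dot-separated versions,
    mirroring the cmp_versions loop traced in the log: the shorter
    version is zero-padded and the first differing component decides."""
    va = [int(x) for x in a.split(".")]
    vb = [int(x) for x in b.split(".")]
    for x, y in zip(va + [0] * len(vb), vb + [0] * len(va)):
        if x != y:
            return x < y
    return False  # equal versions are not strictly less

assert version_lt("1.15", "2")        # the comparison traced above
assert version_lt("1.9", "1.15")      # numeric, not lexicographic
assert not version_lt("2", "2")
```

This is why the trace sets `ver1[v]=1`, `ver2[v]=2` and returns at the first component: 1 < 2 decides the whole comparison.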
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.966 03:15:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.966 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:31.967 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:31.967 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:31.967 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.967 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.967 03:15:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.967 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:31.967 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:31.967 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.967 03:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:37.240 03:15:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:37.240 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:37.240 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:37.240 Found net devices under 0000:86:00.0: cvl_0_0 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.240 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:37.241 Found net devices under 0000:86:00.1: cvl_0_1 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.241 
03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.241 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:07:37.500 00:07:37.500 --- 10.0.0.2 ping statistics --- 00:07:37.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.500 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:07:37.500 00:07:37.500 --- 10.0.0.1 ping statistics --- 00:07:37.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.500 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.500 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2480817 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2480817 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2480817 ']' 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.501 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.501 [2024-12-06 03:15:57.545016] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:07:37.501 [2024-12-06 03:15:57.545069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.501 [2024-12-06 03:15:57.614821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.760 [2024-12-06 03:15:57.660103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.760 [2024-12-06 03:15:57.660137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:37.760 [2024-12-06 03:15:57.660146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.760 [2024-12-06 03:15:57.660153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.760 [2024-12-06 03:15:57.660159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.760 [2024-12-06 03:15:57.660719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 [2024-12-06 03:15:57.798109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 Malloc0 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 [2024-12-06 03:15:57.840651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.760 03:15:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2480969 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2480969 /var/tmp/bdevperf.sock 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2480969 ']' 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:37.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.760 03:15:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 [2024-12-06 03:15:57.892612] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:07:37.760 [2024-12-06 03:15:57.892656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480969 ] 00:07:38.025 [2024-12-06 03:15:57.955548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.025 [2024-12-06 03:15:57.998717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.025 03:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.025 03:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:38.025 03:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:38.025 03:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.025 03:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.371 NVMe0n1 00:07:38.371 03:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.371 03:15:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:38.371 Running I/O for 10 seconds... 
00:07:40.310 11282.00 IOPS, 44.07 MiB/s [2024-12-06T02:16:01.385Z] 11758.50 IOPS, 45.93 MiB/s [2024-12-06T02:16:02.321Z] 11735.67 IOPS, 45.84 MiB/s [2024-12-06T02:16:03.696Z] 11796.00 IOPS, 46.08 MiB/s [2024-12-06T02:16:04.631Z] 11880.00 IOPS, 46.41 MiB/s [2024-12-06T02:16:05.566Z] 11933.83 IOPS, 46.62 MiB/s [2024-12-06T02:16:06.501Z] 11982.57 IOPS, 46.81 MiB/s [2024-12-06T02:16:07.437Z] 12012.88 IOPS, 46.93 MiB/s [2024-12-06T02:16:08.373Z] 12042.44 IOPS, 47.04 MiB/s [2024-12-06T02:16:08.373Z] 12062.50 IOPS, 47.12 MiB/s 00:07:48.232 Latency(us) 00:07:48.232 [2024-12-06T02:16:08.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.232 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:48.232 Verification LBA range: start 0x0 length 0x4000 00:07:48.232 NVMe0n1 : 10.06 12090.00 47.23 0.00 0.00 84441.83 19489.84 55848.07 00:07:48.232 [2024-12-06T02:16:08.373Z] =================================================================================================================== 00:07:48.232 [2024-12-06T02:16:08.373Z] Total : 12090.00 47.23 0.00 0.00 84441.83 19489.84 55848.07 00:07:48.232 { 00:07:48.232 "results": [ 00:07:48.232 { 00:07:48.232 "job": "NVMe0n1", 00:07:48.232 "core_mask": "0x1", 00:07:48.232 "workload": "verify", 00:07:48.232 "status": "finished", 00:07:48.232 "verify_range": { 00:07:48.232 "start": 0, 00:07:48.232 "length": 16384 00:07:48.232 }, 00:07:48.232 "queue_depth": 1024, 00:07:48.232 "io_size": 4096, 00:07:48.232 "runtime": 10.061869, 00:07:48.232 "iops": 12090.000376669583, 00:07:48.232 "mibps": 47.22656397136556, 00:07:48.232 "io_failed": 0, 00:07:48.232 "io_timeout": 0, 00:07:48.232 "avg_latency_us": 84441.83310523878, 00:07:48.232 "min_latency_us": 19489.83652173913, 00:07:48.232 "max_latency_us": 55848.06956521739 00:07:48.232 } 00:07:48.232 ], 00:07:48.232 "core_count": 1 00:07:48.232 } 00:07:48.232 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 2480969 00:07:48.232 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2480969 ']' 00:07:48.232 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2480969 00:07:48.232 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:48.232 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.232 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2480969 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2480969' 00:07:48.491 killing process with pid 2480969 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2480969 00:07:48.491 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.491 00:07:48.491 Latency(us) 00:07:48.491 [2024-12-06T02:16:08.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.491 [2024-12-06T02:16:08.632Z] =================================================================================================================== 00:07:48.491 [2024-12-06T02:16:08.632Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2480969 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.491 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.491 rmmod nvme_tcp 00:07:48.491 rmmod nvme_fabrics 00:07:48.491 rmmod nvme_keyring 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2480817 ']' 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2480817 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2480817 ']' 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2480817 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2480817 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2480817' 00:07:48.751 killing process with pid 2480817 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2480817 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2480817 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.751 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:49.011 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:49.011 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:49.011 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.011 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.011 03:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.917 03:16:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:50.917 00:07:50.917 real 0m19.358s 00:07:50.917 user 0m22.799s 00:07:50.917 sys 0m5.867s 00:07:50.917 03:16:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.917 03:16:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.917 ************************************ 00:07:50.917 END TEST nvmf_queue_depth 00:07:50.917 ************************************ 00:07:50.917 03:16:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:50.917 03:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.917 03:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.917 03:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.917 ************************************ 00:07:50.917 START TEST nvmf_target_multipath 00:07:50.917 ************************************ 00:07:50.917 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:51.178 * Looking for test storage... 
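The queue_depth teardown traced above follows the harness's killprocess pattern: probe liveness with `kill -0` (which delivers no signal), refuse to kill anything whose command name is `sudo`, then kill and wait for the pid. A minimal standalone sketch of the same idea, with `killproc` as our illustrative name for the helper:

```shell
#!/usr/bin/env bash
# Sketch of the liveness-check-then-kill pattern from the teardown trace
# above (autotest_common.sh killprocess); killproc is our illustrative name.
killproc() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 delivers no signal; it only tests that the pid is signalable
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    # the harness refuses to kill a process whose comm name is sudo
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it so the pid is not reused
}

sleep 30 &
bgpid=$!
killproc "$bgpid"
```

The `wait` matters in the trace's context: it reaps the child so a later `kill -0` on the same pid reliably reports it gone.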
00:07:51.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:51.178 03:16:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:51.178 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
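The trace above steps through scripts/common.sh's `cmp_versions` helper: it splits both versions on `.` and `-` into arrays, then compares component by component until one side wins, which is how `lt 1.15 2` resolves. A self-contained sketch of that algorithm (function names follow the trace; the padding of missing components with 0 is our assumption):

```shell
#!/usr/bin/env bash
# Component-wise version comparison, as walked through in the trace above.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.- op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # missing components count as 0, so "2" compares like "2.0"
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == ">" || $op == ">=" ]]; return; }
        (( a < b )) && { [[ $op == "<" || $op == "<=" ]]; return; }
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]
}

lt 1.15 2 && echo "1.15 < 2"
```

Splitting on `.-` (not just `.`) lets pre-release suffixes like `1.15-rc1` participate in the numeric walk.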
00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:51.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.179 --rc genhtml_branch_coverage=1 00:07:51.179 --rc genhtml_function_coverage=1 00:07:51.179 --rc genhtml_legend=1 00:07:51.179 --rc geninfo_all_blocks=1 00:07:51.179 --rc geninfo_unexecuted_blocks=1 00:07:51.179 00:07:51.179 ' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:51.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.179 --rc genhtml_branch_coverage=1 00:07:51.179 --rc genhtml_function_coverage=1 00:07:51.179 --rc genhtml_legend=1 00:07:51.179 --rc geninfo_all_blocks=1 00:07:51.179 --rc geninfo_unexecuted_blocks=1 00:07:51.179 00:07:51.179 ' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:51.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.179 --rc genhtml_branch_coverage=1 00:07:51.179 --rc genhtml_function_coverage=1 00:07:51.179 --rc genhtml_legend=1 00:07:51.179 --rc geninfo_all_blocks=1 00:07:51.179 --rc geninfo_unexecuted_blocks=1 00:07:51.179 00:07:51.179 ' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:51.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.179 --rc genhtml_branch_coverage=1 00:07:51.179 --rc genhtml_function_coverage=1 00:07:51.179 --rc genhtml_legend=1 00:07:51.179 --rc geninfo_all_blocks=1 00:07:51.179 --rc geninfo_unexecuted_blocks=1 00:07:51.179 00:07:51.179 ' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.179 03:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:56.454 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:56.454 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:56.454 Found net devices under 0000:86:00.0: cvl_0_0 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.454 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.455 03:16:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:56.455 Found net devices under 0000:86:00.1: cvl_0_1 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
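The device-discovery loop traced above maps each candidate PCI function to its kernel netdev by globbing the device's `net/` subdirectory in sysfs, then stripping the path prefix with `##*/` — which is how `cvl_0_0` and `cvl_0_1` are found under `0000:86:00.0` and `0000:86:00.1`. A sketch of that mapping against a fabricated stand-in directory (the real code globs `/sys/bus/pci/devices/$pci/net/`; the `mktemp` tree here exists only so the demo is runnable anywhere):

```shell
#!/usr/bin/env bash
# Sysfs PCI-address -> net-device-name mapping, as in the trace above.
# The demo directory imitates /sys/bus/pci/devices and is fabricated.
demo=$(mktemp -d)
mkdir -p "$demo/0000:86:00.0/net/cvl_0_0" "$demo/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in "$demo"/*; do
    pci_net_devs=("$pci/net/"*)              # one entry per netdev under this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the interface names
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$demo"
```

Accumulating into `net_devs` is what later lets the script pick a target and an initiator interface once it sees `(( 2 > 1 ))`.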
00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:56.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:07:56.455 00:07:56.455 --- 10.0.0.2 ping statistics --- 00:07:56.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.455 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:07:56.455 00:07:56.455 --- 10.0.0.1 ping statistics --- 00:07:56.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.455 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:56.455 only one NIC for nvmf test 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:56.455 03:16:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.455 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.455 rmmod nvme_tcp 00:07:56.455 rmmod nvme_fabrics 00:07:56.455 rmmod nvme_keyring 00:07:56.714 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.714 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.715 03:16:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.619 00:07:58.619 real 0m7.711s 00:07:58.619 user 0m1.546s 00:07:58.619 sys 0m4.122s 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.619 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:58.619 ************************************ 00:07:58.619 END TEST nvmf_target_multipath 00:07:58.619 ************************************ 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.878 ************************************ 00:07:58.878 START TEST nvmf_zcopy 00:07:58.878 ************************************ 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:58.878 * Looking for test storage... 00:07:58.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.878 03:16:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:58.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.878 --rc genhtml_branch_coverage=1 00:07:58.878 --rc genhtml_function_coverage=1 00:07:58.878 --rc genhtml_legend=1 00:07:58.878 --rc geninfo_all_blocks=1 00:07:58.878 --rc geninfo_unexecuted_blocks=1 00:07:58.878 00:07:58.878 ' 00:07:58.878 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:58.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.878 --rc genhtml_branch_coverage=1 00:07:58.878 --rc genhtml_function_coverage=1 00:07:58.878 --rc genhtml_legend=1 00:07:58.878 --rc geninfo_all_blocks=1 00:07:58.878 --rc geninfo_unexecuted_blocks=1 00:07:58.879 00:07:58.879 ' 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.879 --rc genhtml_branch_coverage=1 00:07:58.879 --rc genhtml_function_coverage=1 00:07:58.879 --rc genhtml_legend=1 00:07:58.879 --rc geninfo_all_blocks=1 00:07:58.879 --rc geninfo_unexecuted_blocks=1 00:07:58.879 00:07:58.879 ' 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.879 --rc genhtml_branch_coverage=1 00:07:58.879 --rc 
genhtml_function_coverage=1 00:07:58.879 --rc genhtml_legend=1 00:07:58.879 --rc geninfo_all_blocks=1 00:07:58.879 --rc geninfo_unexecuted_blocks=1 00:07:58.879 00:07:58.879 ' 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.879 03:16:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.879 03:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.879 03:16:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.879 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.138 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.138 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.138 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.138 03:16:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.408 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.408 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.408 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.408 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.408 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.408 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.408 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.408 03:16:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:04.409 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:04.409 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:04.409 Found net devices under 0000:86:00.0: cvl_0_0 00:08:04.409 03:16:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:04.409 Found net devices under 0000:86:00.1: cvl_0_1 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.409 03:16:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.409 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.410 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.410 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.410 03:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:08:04.410 00:08:04.410 --- 10.0.0.2 ping statistics --- 00:08:04.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.410 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:08:04.410 00:08:04.410 --- 10.0.0.1 ping statistics --- 00:08:04.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.410 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2489636 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2489636 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2489636 ']' 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 [2024-12-06 03:16:24.190116] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:08:04.410 [2024-12-06 03:16:24.190162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.410 [2024-12-06 03:16:24.258771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.410 [2024-12-06 03:16:24.298358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.410 [2024-12-06 03:16:24.298399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:04.410 [2024-12-06 03:16:24.298406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.410 [2024-12-06 03:16:24.298412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.410 [2024-12-06 03:16:24.298417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.410 [2024-12-06 03:16:24.299003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 [2024-12-06 03:16:24.431796] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 [2024-12-06 03:16:24.447998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 malloc0 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:04.410 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:04.411 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:04.411 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.411 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.411 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.411 { 00:08:04.411 "params": { 00:08:04.411 "name": "Nvme$subsystem", 00:08:04.411 "trtype": "$TEST_TRANSPORT", 00:08:04.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.411 "adrfam": "ipv4", 00:08:04.411 "trsvcid": "$NVMF_PORT", 00:08:04.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.411 "hdgst": ${hdgst:-false}, 00:08:04.411 "ddgst": ${ddgst:-false} 00:08:04.411 }, 00:08:04.411 "method": "bdev_nvme_attach_controller" 00:08:04.411 } 00:08:04.411 EOF 00:08:04.411 )") 00:08:04.411 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:04.411 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:04.411 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:04.411 03:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.411 "params": { 00:08:04.411 "name": "Nvme1", 00:08:04.411 "trtype": "tcp", 00:08:04.411 "traddr": "10.0.0.2", 00:08:04.411 "adrfam": "ipv4", 00:08:04.411 "trsvcid": "4420", 00:08:04.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.411 "hdgst": false, 00:08:04.411 "ddgst": false 00:08:04.411 }, 00:08:04.411 "method": "bdev_nvme_attach_controller" 00:08:04.411 }' 00:08:04.411 [2024-12-06 03:16:24.527612] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:08:04.411 [2024-12-06 03:16:24.527654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489757 ] 00:08:04.669 [2024-12-06 03:16:24.590649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.669 [2024-12-06 03:16:24.631823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.929 Running I/O for 10 seconds... 
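The trace up to this point sets up a namespaced NVMe/TCP target and starts a verify workload against it. The sequence can be sketched as below; it is reconstructed from the xtrace lines above, not taken from the test scripts themselves. The interface names (`cvl_0_0`/`cvl_0_1`), addresses, and ports are environment-specific values from this run, `rpc_cmd` in the harness is assumed to wrap `scripts/rpc.py` from the usual SPDK tree layout, and `DRY_RUN=1` (the default here) only prints the commands, since the real ones need root, the two NICs, and a running `nvmf_tgt`.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_zcopy setup traced above. Names and paths are taken
# from the log or assumed (see lead-in); DRY_RUN=1 echoes instead of executing.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
# Move the target NIC into a network namespace; the initiator NIC stays in
# the root namespace, giving two endpoints on one host.
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                # reachability check

# RPC sequence issued against the namespaced nvmf_tgt (zero-copy enabled,
# as in target/zcopy.sh); rpc.py path is an assumption.
RPC=scripts/rpc.py
run "$RPC" nvmf_create_transport -t tcp -o -c 0 --zcopy
run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10
run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
run "$RPC" bdev_malloc_create 32 4096 -b malloc0
run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# 10-second verify workload, queue depth 128, 8 KiB I/O, matching the log.
run build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
```

With `DRY_RUN=0` this mirrors what the harness does via its own helpers; the log's results table that follows corresponds to the final `bdevperf` step.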
00:08:06.803 8485.00 IOPS, 66.29 MiB/s [2024-12-06T02:16:28.337Z] 8554.50 IOPS, 66.83 MiB/s [2024-12-06T02:16:29.275Z] 8586.67 IOPS, 67.08 MiB/s [2024-12-06T02:16:30.213Z] 8595.25 IOPS, 67.15 MiB/s [2024-12-06T02:16:31.150Z] 8596.20 IOPS, 67.16 MiB/s [2024-12-06T02:16:32.087Z] 8599.67 IOPS, 67.18 MiB/s [2024-12-06T02:16:33.025Z] 8603.57 IOPS, 67.22 MiB/s [2024-12-06T02:16:33.963Z] 8610.38 IOPS, 67.27 MiB/s [2024-12-06T02:16:35.343Z] 8612.00 IOPS, 67.28 MiB/s [2024-12-06T02:16:35.343Z] 8615.70 IOPS, 67.31 MiB/s 00:08:15.202 Latency(us) 00:08:15.202 [2024-12-06T02:16:35.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.202 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:15.202 Verification LBA range: start 0x0 length 0x1000 00:08:15.202 Nvme1n1 : 10.01 8617.76 67.33 0.00 0.00 14810.41 2578.70 23251.03 00:08:15.202 [2024-12-06T02:16:35.343Z] =================================================================================================================== 00:08:15.202 [2024-12-06T02:16:35.343Z] Total : 8617.76 67.33 0.00 0.00 14810.41 2578.70 23251.03 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2491405 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.202 03:16:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.202 { 00:08:15.202 "params": { 00:08:15.202 "name": "Nvme$subsystem", 00:08:15.202 "trtype": "$TEST_TRANSPORT", 00:08:15.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.202 "adrfam": "ipv4", 00:08:15.202 "trsvcid": "$NVMF_PORT", 00:08:15.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.202 "hdgst": ${hdgst:-false}, 00:08:15.202 "ddgst": ${ddgst:-false} 00:08:15.202 }, 00:08:15.202 "method": "bdev_nvme_attach_controller" 00:08:15.202 } 00:08:15.202 EOF 00:08:15.202 )") 00:08:15.202 [2024-12-06 03:16:35.120565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.202 [2024-12-06 03:16:35.120599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:15.202 [2024-12-06 03:16:35.128545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.202 [2024-12-06 03:16:35.128558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:15.202 03:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.202 "params": { 00:08:15.202 "name": "Nvme1", 00:08:15.202 "trtype": "tcp", 00:08:15.202 "traddr": "10.0.0.2", 00:08:15.202 "adrfam": "ipv4", 00:08:15.202 "trsvcid": "4420", 00:08:15.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.202 "hdgst": false, 00:08:15.202 "ddgst": false 00:08:15.202 }, 00:08:15.202 "method": "bdev_nvme_attach_controller" 00:08:15.202 }' 00:08:15.202 [2024-12-06 03:16:35.136562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.202 [2024-12-06 03:16:35.136574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.202 [2024-12-06 03:16:35.144580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.202 [2024-12-06 03:16:35.144591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.202 [2024-12-06 03:16:35.152601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.202 [2024-12-06 03:16:35.152612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.202 [2024-12-06 03:16:35.160624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.202 [2024-12-06 03:16:35.160635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.202 [2024-12-06 03:16:35.161813] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:08:15.202 [2024-12-06 03:16:35.161856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491405 ] 00:08:15.202 [2024-12-06 03:16:35.168646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.202 [2024-12-06 03:16:35.168658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.202 [2024-12-06 03:16:35.176667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.202 [2024-12-06 03:16:35.176677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.202 [2024-12-06 03:16:35.184688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.202 [2024-12-06 03:16:35.184698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.202 [2024-12-06 03:16:35.192710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.192722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.200733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.200744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.208753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.208764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.216775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.216786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:15.203 [2024-12-06 03:16:35.224747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.203 [2024-12-06 03:16:35.224803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.224817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.232820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.232833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.240841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.240855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.248863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.248876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.256884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.256895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.264906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.264920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.267197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.203 [2024-12-06 03:16:35.272928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.272940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.280964] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.280999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.288983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.289000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.297003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.297019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.305018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.305031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.313039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.313052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.321058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.321071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.329082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.329095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.203 [2024-12-06 03:16:35.337104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.203 [2024-12-06 03:16:35.337116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.461 [2024-12-06 03:16:35.345123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:15.461 [2024-12-06 03:16:35.345134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.461 [2024-12-06 03:16:35.353143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.461 [2024-12-06 03:16:35.353154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.461 [2024-12-06 03:16:35.361180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.461 [2024-12-06 03:16:35.361199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.461 [2024-12-06 03:16:35.369200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.369217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.377218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.377231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.385239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.385255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.393258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.393271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.401280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.401296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.409300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 
[2024-12-06 03:16:35.409313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.417320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.417331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.462183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.462201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.469466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.469479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 Running I/O for 5 seconds... 00:08:15.462 [2024-12-06 03:16:35.477484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.477495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.489550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.489571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.497150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.497170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.506347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.506366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.515023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 
03:16:35.515042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.524494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.524514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.533991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.534011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.543514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.543534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.553037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.553057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.562358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.562377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.569207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.569226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.580300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.580319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.462 [2024-12-06 03:16:35.588930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.462 [2024-12-06 03:16:35.588956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace
00:08:15.462 [2024-12-06 03:16:35.598711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:15.462 [2024-12-06 03:16:35.598730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:16.501 16552.00 IOPS, 129.31 MiB/s [2024-12-06T02:16:36.642Z]
00:08:17.280 [2024-12-06 03:16:37.159818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:17.280 [2024-12-06 03:16:37.159838] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.169403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.169421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.178820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.178839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.187669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.187687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.196938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.196962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.205649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.205670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.215091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.215112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.223746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.223769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.233279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.233299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:17.280 [2024-12-06 03:16:37.242539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.242558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.251240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.251259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.260422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.260440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.269168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.269187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.277824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.277843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.286540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.286559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.295521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.295540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.304159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.304178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.312709] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.312728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.322712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.322731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.331613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.331632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.340188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.340207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.349345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.280 [2024-12-06 03:16:37.349364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.280 [2024-12-06 03:16:37.358700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.281 [2024-12-06 03:16:37.358719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.281 [2024-12-06 03:16:37.367422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.281 [2024-12-06 03:16:37.367442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.281 [2024-12-06 03:16:37.376784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.281 [2024-12-06 03:16:37.376804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.281 [2024-12-06 03:16:37.385406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:17.281 [2024-12-06 03:16:37.385425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.281 [2024-12-06 03:16:37.394654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.281 [2024-12-06 03:16:37.394676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.281 [2024-12-06 03:16:37.403986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.281 [2024-12-06 03:16:37.404006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.281 [2024-12-06 03:16:37.412657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.281 [2024-12-06 03:16:37.412676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.422073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.422093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.430980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.431000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.439723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.439742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.449046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.449066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.458259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 
[2024-12-06 03:16:37.458278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.467628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.467648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.476361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.476380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 16599.50 IOPS, 129.68 MiB/s [2024-12-06T02:16:37.681Z] [2024-12-06 03:16:37.485611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.485629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.494804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.494823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.504077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.504096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.513303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.513322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.522507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.522526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.531763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 
[2024-12-06 03:16:37.531782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.540895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.540913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.550149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.550168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.558719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.558738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.567824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.567843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.576969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.576987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.585510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.585529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.540 [2024-12-06 03:16:37.594695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.540 [2024-12-06 03:16:37.594714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.541 [2024-12-06 03:16:37.604025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.541 [2024-12-06 03:16:37.604044] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.541 [2024-12-06 03:16:37.613255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.541 [2024-12-06 03:16:37.613274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.541 [2024-12-06 03:16:37.622062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.541 [2024-12-06 03:16:37.622081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.541 [2024-12-06 03:16:37.631271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.541 [2024-12-06 03:16:37.631290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.541 [2024-12-06 03:16:37.640638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.541 [2024-12-06 03:16:37.640657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.541 [2024-12-06 03:16:37.650000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.541 [2024-12-06 03:16:37.650020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.541 [2024-12-06 03:16:37.658704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.541 [2024-12-06 03:16:37.658722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.541 [2024-12-06 03:16:37.667829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.541 [2024-12-06 03:16:37.667848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.541 [2024-12-06 03:16:37.677330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.541 [2024-12-06 03:16:37.677349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:17.801 [2024-12-06 03:16:37.686688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.686707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.696163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.696182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.705483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.705503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.714884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.714903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.724219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.724238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.733283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.733303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.742350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.742369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.751601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.751621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.761008] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.761027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.769640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.769659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.779078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.779096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.788485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.788505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.797017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.797036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.806287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.806306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.814890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.814910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.824707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.824726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.833643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.833663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.843016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.843036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.852657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.852677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.861544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.861565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.870717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.870737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.879371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.879391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.888619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.888638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.898311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.898330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.907748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 
[2024-12-06 03:16:37.907768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.916992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.917012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.926450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.926470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.801 [2024-12-06 03:16:37.935082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.801 [2024-12-06 03:16:37.935102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:37.943954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:37.943974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:37.953150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:37.953169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:37.962364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:37.962383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:37.971619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:37.971638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:37.980910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:37.980929] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:37.990102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:37.990121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:37.999348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:37.999368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.008553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.008572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.017907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.017926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.025505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.025524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.036315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.036335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.045059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.045078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.054365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.054386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:18.061 [2024-12-06 03:16:38.061263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.061282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.072224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.072243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.081772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.081795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.091053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.091073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.100279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.100298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.109844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.109864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.118906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.118925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.128195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.128214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061 [2024-12-06 03:16:38.137411] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.061 [2024-12-06 03:16:38.137431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.061
16617.33 IOPS, 129.82 MiB/s [2024-12-06T02:16:38.721Z]
16638.25 IOPS, 129.99 MiB/s [2024-12-06T02:16:39.500Z]
add namespace 00:08:19.619 [2024-12-06 03:16:39.710112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.619 [2024-12-06 03:16:39.710143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.619 [2024-12-06 03:16:39.718738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.619 [2024-12-06 03:16:39.718757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.619 [2024-12-06 03:16:39.727435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.619 [2024-12-06 03:16:39.727455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.619 [2024-12-06 03:16:39.736254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.619 [2024-12-06 03:16:39.736273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.619 [2024-12-06 03:16:39.745156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.619 [2024-12-06 03:16:39.745174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.619 [2024-12-06 03:16:39.754466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.619 [2024-12-06 03:16:39.754485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.763129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.763148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.772604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.772623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.781874] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.781893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.791696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.791714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.800385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.800414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.810126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.810145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.819229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.819249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.827906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.827926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.837200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.837220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.846555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.846575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.855674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.855694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.865051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.865071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.874327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.874346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.883498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.883516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.892129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.892147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.901491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.901510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.910290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.910310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.919845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 [2024-12-06 03:16:39.919865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.878 [2024-12-06 03:16:39.926772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.878 
[2024-12-06 03:16:39.926791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.879 [2024-12-06 03:16:39.937895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.879 [2024-12-06 03:16:39.937915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.879 [2024-12-06 03:16:39.946687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.879 [2024-12-06 03:16:39.946706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.879 [2024-12-06 03:16:39.955223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.879 [2024-12-06 03:16:39.955242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.879 [2024-12-06 03:16:39.964674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.879 [2024-12-06 03:16:39.964693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.879 [2024-12-06 03:16:39.974082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.879 [2024-12-06 03:16:39.974102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.879 [2024-12-06 03:16:39.983317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.879 [2024-12-06 03:16:39.983336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.879 [2024-12-06 03:16:39.992680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.879 [2024-12-06 03:16:39.992699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.879 [2024-12-06 03:16:40.002096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.879 [2024-12-06 03:16:40.002116] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.879 [2024-12-06 03:16:40.017140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.879 [2024-12-06 03:16:40.017159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.032561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.032581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.047268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.047288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.063051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.063073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.077206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.077227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.090794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.090814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.104568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.104588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.118686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.118706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:20.138 [2024-12-06 03:16:40.132777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.132797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.146803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.146823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.160570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.160590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.174555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.174575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.188654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.188674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.200003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.200022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.214667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.214686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.226379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.226398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.240887] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.240907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.254407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.254427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.138 [2024-12-06 03:16:40.268691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.138 [2024-12-06 03:16:40.268712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.283041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.283062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.294077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.294098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.308631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.308651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.322558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.322578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.337026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.337046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.352600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.352620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.367336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.367356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.378739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.378759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.393882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.393901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.408633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.408652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.423174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.423193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.436917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.436937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.451387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.451406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.462768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 
[2024-12-06 03:16:40.462787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.477087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.477106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 16636.80 IOPS, 129.97 MiB/s [2024-12-06T02:16:40.539Z] [2024-12-06 03:16:40.491170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.491195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 00:08:20.398 Latency(us) 00:08:20.398 [2024-12-06T02:16:40.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.398 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:20.398 Nvme1n1 : 5.01 16640.42 130.00 0.00 0.00 7685.09 3305.29 14246.96 00:08:20.398 [2024-12-06T02:16:40.539Z] =================================================================================================================== 00:08:20.398 [2024-12-06T02:16:40.539Z] Total : 16640.42 130.00 0.00 0.00 7685.09 3305.29 14246.96 00:08:20.398 [2024-12-06 03:16:40.501373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.501391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.513401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.513416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.398 [2024-12-06 03:16:40.525442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.398 [2024-12-06 03:16:40.525460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.537469] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.537486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.549502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.549516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.561531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.561545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.573564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.573577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.585594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.585607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.597626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.597642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.609657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.609668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.621687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.621698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.633724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.633736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.645756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.645769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 [2024-12-06 03:16:40.657786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.657 [2024-12-06 03:16:40.657797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2491405) - No such process 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2491405 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.657 delay0 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.657 03:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:20.916 [2024-12-06 03:16:40.799882] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:27.496 Initializing NVMe Controllers 00:08:27.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:27.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:27.497 Initialization complete. Launching workers. 
00:08:27.497 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3122 00:08:27.497 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3386, failed to submit 56 00:08:27.497 success 3235, unsuccessful 151, failed 0 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.497 rmmod nvme_tcp 00:08:27.497 rmmod nvme_fabrics 00:08:27.497 rmmod nvme_keyring 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2489636 ']' 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2489636 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2489636 ']' 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2489636 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2489636 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2489636' 00:08:27.497 killing process with pid 2489636 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2489636 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2489636 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.497 03:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.402 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:29.402 00:08:29.402 real 0m30.706s 00:08:29.402 user 0m41.822s 00:08:29.402 sys 0m10.321s 00:08:29.402 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.402 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.402 ************************************ 00:08:29.402 END TEST nvmf_zcopy 00:08:29.402 ************************************ 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.662 ************************************ 00:08:29.662 START TEST nvmf_nmic 00:08:29.662 ************************************ 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:29.662 * Looking for test storage... 
00:08:29.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.662 03:16:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.662 --rc genhtml_branch_coverage=1 00:08:29.662 --rc genhtml_function_coverage=1 00:08:29.662 --rc genhtml_legend=1 00:08:29.662 --rc geninfo_all_blocks=1 00:08:29.662 --rc geninfo_unexecuted_blocks=1 
00:08:29.662 00:08:29.662 ' 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.662 --rc genhtml_branch_coverage=1 00:08:29.662 --rc genhtml_function_coverage=1 00:08:29.662 --rc genhtml_legend=1 00:08:29.662 --rc geninfo_all_blocks=1 00:08:29.662 --rc geninfo_unexecuted_blocks=1 00:08:29.662 00:08:29.662 ' 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.662 --rc genhtml_branch_coverage=1 00:08:29.662 --rc genhtml_function_coverage=1 00:08:29.662 --rc genhtml_legend=1 00:08:29.662 --rc geninfo_all_blocks=1 00:08:29.662 --rc geninfo_unexecuted_blocks=1 00:08:29.662 00:08:29.662 ' 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.662 --rc genhtml_branch_coverage=1 00:08:29.662 --rc genhtml_function_coverage=1 00:08:29.662 --rc genhtml_legend=1 00:08:29.662 --rc geninfo_all_blocks=1 00:08:29.662 --rc geninfo_unexecuted_blocks=1 00:08:29.662 00:08:29.662 ' 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.662 03:16:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.662 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:29.663 
03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:29.663 03:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.249 03:16:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:36.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.249 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:36.250 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:36.250 Found net devices under 0000:86:00.0: cvl_0_0 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:36.250 Found net devices under 0000:86:00.1: cvl_0_1 00:08:36.250 
03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:36.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:08:36.250 00:08:36.250 --- 10.0.0.2 ping statistics --- 00:08:36.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.250 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:08:36.250 00:08:36.250 --- 10.0.0.1 ping statistics --- 00:08:36.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.250 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2496972 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2496972 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2496972 ']' 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.250 [2024-12-06 03:16:55.540806] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:08:36.250 [2024-12-06 03:16:55.540855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.250 [2024-12-06 03:16:55.608209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.250 [2024-12-06 03:16:55.652687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.250 [2024-12-06 03:16:55.652723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:36.250 [2024-12-06 03:16:55.652733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.250 [2024-12-06 03:16:55.652739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.250 [2024-12-06 03:16:55.652743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.250 [2024-12-06 03:16:55.654352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.250 [2024-12-06 03:16:55.654448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.250 [2024-12-06 03:16:55.654533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.250 [2024-12-06 03:16:55.654535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.250 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 [2024-12-06 03:16:55.793740] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.251 
03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 Malloc0 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 [2024-12-06 03:16:55.864727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:36.251 test case1: single bdev can't be used in multiple subsystems 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 [2024-12-06 03:16:55.888636] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:36.251 [2024-12-06 
03:16:55.888657] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:36.251 [2024-12-06 03:16:55.888665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.251 request: 00:08:36.251 { 00:08:36.251 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:36.251 "namespace": { 00:08:36.251 "bdev_name": "Malloc0", 00:08:36.251 "no_auto_visible": false, 00:08:36.251 "hide_metadata": false 00:08:36.251 }, 00:08:36.251 "method": "nvmf_subsystem_add_ns", 00:08:36.251 "req_id": 1 00:08:36.251 } 00:08:36.251 Got JSON-RPC error response 00:08:36.251 response: 00:08:36.251 { 00:08:36.251 "code": -32602, 00:08:36.251 "message": "Invalid parameters" 00:08:36.251 } 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:36.251 Adding namespace failed - expected result. 
00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:36.251 test case2: host connect to nvmf target in multiple paths 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.251 [2024-12-06 03:16:55.900771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.251 03:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.189 03:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:38.126 03:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:38.126 03:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:38.126 03:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:38.126 03:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:38.126 03:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:40.659 03:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:40.659 03:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:40.659 03:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.659 03:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:40.659 03:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.659 03:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:40.659 03:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:40.659 [global] 00:08:40.659 thread=1 00:08:40.659 invalidate=1 00:08:40.659 rw=write 00:08:40.659 time_based=1 00:08:40.659 runtime=1 00:08:40.659 ioengine=libaio 00:08:40.659 direct=1 00:08:40.659 bs=4096 00:08:40.659 iodepth=1 00:08:40.659 norandommap=0 00:08:40.659 numjobs=1 00:08:40.659 00:08:40.659 verify_dump=1 00:08:40.659 verify_backlog=512 00:08:40.659 verify_state_save=0 00:08:40.659 do_verify=1 00:08:40.659 verify=crc32c-intel 00:08:40.659 [job0] 00:08:40.659 filename=/dev/nvme0n1 00:08:40.659 Could not set queue depth (nvme0n1) 00:08:40.659 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:40.659 fio-3.35 00:08:40.659 Starting 1 thread 00:08:41.596 00:08:41.596 job0: (groupid=0, jobs=1): err= 0: pid=2498044: Fri Dec 6 03:17:01 2024 00:08:41.596 read: IOPS=2159, BW=8639KiB/s (8847kB/s)(8648KiB/1001msec) 00:08:41.596 slat (nsec): min=6603, max=28635, avg=8104.84, stdev=1181.76 00:08:41.596 clat (usec): min=175, max=1151, avg=246.33, stdev=36.48 00:08:41.596 lat (usec): min=183, max=1159, 
avg=254.43, stdev=36.59 00:08:41.596 clat percentiles (usec): 00:08:41.596 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:08:41.596 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:08:41.596 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 269], 00:08:41.596 | 99.00th=[ 338], 99.50th=[ 453], 99.90th=[ 725], 99.95th=[ 938], 00:08:41.596 | 99.99th=[ 1156] 00:08:41.596 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:41.596 slat (nsec): min=8462, max=45604, avg=11080.85, stdev=1977.51 00:08:41.596 clat (usec): min=114, max=325, avg=159.39, stdev=14.10 00:08:41.596 lat (usec): min=124, max=360, avg=170.47, stdev=14.37 00:08:41.596 clat percentiles (usec): 00:08:41.596 | 1.00th=[ 123], 5.00th=[ 129], 10.00th=[ 145], 20.00th=[ 153], 00:08:41.596 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:08:41.596 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 176], 00:08:41.596 | 99.00th=[ 190], 99.50th=[ 212], 99.90th=[ 302], 99.95th=[ 326], 00:08:41.596 | 99.99th=[ 326] 00:08:41.596 bw ( KiB/s): min=10760, max=10760, per=100.00%, avg=10760.00, stdev= 0.00, samples=1 00:08:41.596 iops : min= 2690, max= 2690, avg=2690.00, stdev= 0.00, samples=1 00:08:41.596 lat (usec) : 250=80.09%, 500=19.84%, 750=0.02%, 1000=0.02% 00:08:41.596 lat (msec) : 2=0.02% 00:08:41.596 cpu : usr=3.10%, sys=7.70%, ctx=4722, majf=0, minf=1 00:08:41.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:41.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:41.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:41.596 issued rwts: total=2162,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:41.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:41.596 00:08:41.596 Run status group 0 (all jobs): 00:08:41.596 READ: bw=8639KiB/s (8847kB/s), 8639KiB/s-8639KiB/s (8847kB/s-8847kB/s), io=8648KiB 
(8856kB), run=1001-1001msec 00:08:41.596 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:08:41.596 00:08:41.596 Disk stats (read/write): 00:08:41.596 nvme0n1: ios=2098/2105, merge=0/0, ticks=503/315, in_queue=818, util=91.18% 00:08:41.596 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:41.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.856 rmmod nvme_tcp 00:08:41.856 rmmod nvme_fabrics 00:08:41.856 rmmod nvme_keyring 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2496972 ']' 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2496972 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2496972 ']' 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2496972 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2496972 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2496972' 00:08:41.856 killing process with pid 2496972 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2496972 00:08:41.856 03:17:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2496972 00:08:42.116 03:17:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.116 03:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.653 00:08:44.653 real 0m14.655s 00:08:44.653 user 0m32.707s 00:08:44.653 sys 0m5.182s 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 ************************************ 00:08:44.653 END TEST nvmf_nmic 00:08:44.653 ************************************ 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.653 ************************************ 00:08:44.653 START TEST nvmf_fio_target 00:08:44.653 ************************************ 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:44.653 * Looking for test storage... 00:08:44.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:44.653 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:44.654 03:17:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:44.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.654 --rc genhtml_branch_coverage=1 00:08:44.654 --rc genhtml_function_coverage=1 00:08:44.654 --rc genhtml_legend=1 00:08:44.654 --rc geninfo_all_blocks=1 00:08:44.654 --rc geninfo_unexecuted_blocks=1 00:08:44.654 00:08:44.654 ' 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:44.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.654 --rc genhtml_branch_coverage=1 00:08:44.654 --rc genhtml_function_coverage=1 00:08:44.654 --rc genhtml_legend=1 00:08:44.654 --rc geninfo_all_blocks=1 00:08:44.654 --rc geninfo_unexecuted_blocks=1 00:08:44.654 00:08:44.654 ' 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:44.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.654 --rc genhtml_branch_coverage=1 00:08:44.654 --rc genhtml_function_coverage=1 00:08:44.654 --rc genhtml_legend=1 00:08:44.654 --rc geninfo_all_blocks=1 00:08:44.654 --rc geninfo_unexecuted_blocks=1 00:08:44.654 00:08:44.654 ' 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:44.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.654 --rc genhtml_branch_coverage=1 00:08:44.654 --rc genhtml_function_coverage=1 00:08:44.654 --rc genhtml_legend=1 00:08:44.654 --rc geninfo_all_blocks=1 00:08:44.654 --rc geninfo_unexecuted_blocks=1 00:08:44.654 00:08:44.654 ' 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.654 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.655 03:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.923 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.923 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.923 03:17:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.923 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.923 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.923 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.923 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.923 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.923 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:49.924 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:49.924 03:17:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:49.924 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:49.924 Found net devices under 0000:86:00.0: cvl_0_0 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:49.924 Found net devices under 0000:86:00.1: cvl_0_1 
00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:08:49.924 00:08:49.924 --- 10.0.0.2 ping statistics --- 00:08:49.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.924 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:08:49.924 00:08:49.924 --- 10.0.0.1 ping statistics --- 00:08:49.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.924 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.924 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2501633 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2501633 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2501633 ']' 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.925 03:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.925 [2024-12-06 03:17:10.036328] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:08:49.925 [2024-12-06 03:17:10.036382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.184 [2024-12-06 03:17:10.105379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.184 [2024-12-06 03:17:10.149249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.184 [2024-12-06 03:17:10.149285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.184 [2024-12-06 03:17:10.149292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.184 [2024-12-06 03:17:10.149298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.184 [2024-12-06 03:17:10.149307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:50.184 [2024-12-06 03:17:10.150783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.184 [2024-12-06 03:17:10.150802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.184 [2024-12-06 03:17:10.150890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.184 [2024-12-06 03:17:10.150892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.184 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.184 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:50.184 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:50.184 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:50.184 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.185 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.185 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.444 [2024-12-06 03:17:10.450746] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.444 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.704 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:50.704 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.963 03:17:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:50.963 03:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.223 03:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:51.223 03:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.223 03:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:51.223 03:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:51.483 03:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.742 03:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:51.742 03:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.001 03:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:52.001 03:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.261 03:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:52.261 03:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:52.261 03:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:52.522 03:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:52.522 03:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.781 03:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:52.781 03:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:53.040 03:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.040 [2024-12-06 03:17:13.130041] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.040 03:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:53.298 03:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:53.557 03:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
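Stripped of timestamps and xtrace noise, the setup the harness has traced up to this point reduces to the following command outline (NQN, serial, address, and port copied verbatim from the trace; `rpc.py` stands for the full `scripts/rpc.py` path). This is a summary of the trace, not a runnable script — it assumes a live `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and root privileges:

```
# outline of target/fio.sh setup as traced above (requires running SPDK nvmf_tgt, root)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512           # repeated seven times: Malloc0 .. Malloc6
rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # likewise Malloc1, raid0, concat0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```

The `nvme connect` at the end is what produces the four `/dev/nvme0n1..n4` block devices that the fio jobs below target (one namespace per `nvmf_subsystem_add_ns` call).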
00:08:54.944 03:17:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:54.944 03:17:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:54.944 03:17:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.944 03:17:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:54.944 03:17:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:54.944 03:17:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:56.842 03:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:56.842 03:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:56.842 03:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.842 03:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:56.842 03:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.842 03:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:56.842 03:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:56.842 [global] 00:08:56.842 thread=1 00:08:56.842 invalidate=1 00:08:56.842 rw=write 00:08:56.842 time_based=1 00:08:56.842 runtime=1 00:08:56.842 ioengine=libaio 00:08:56.842 direct=1 00:08:56.842 bs=4096 00:08:56.842 iodepth=1 00:08:56.842 norandommap=0 00:08:56.842 numjobs=1 00:08:56.842 00:08:56.842 
verify_dump=1 00:08:56.842 verify_backlog=512 00:08:56.842 verify_state_save=0 00:08:56.842 do_verify=1 00:08:56.842 verify=crc32c-intel 00:08:56.842 [job0] 00:08:56.842 filename=/dev/nvme0n1 00:08:56.842 [job1] 00:08:56.842 filename=/dev/nvme0n2 00:08:56.842 [job2] 00:08:56.842 filename=/dev/nvme0n3 00:08:56.842 [job3] 00:08:56.842 filename=/dev/nvme0n4 00:08:56.842 Could not set queue depth (nvme0n1) 00:08:56.842 Could not set queue depth (nvme0n2) 00:08:56.842 Could not set queue depth (nvme0n3) 00:08:56.842 Could not set queue depth (nvme0n4) 00:08:57.101 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.101 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.101 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.101 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.101 fio-3.35 00:08:57.101 Starting 4 threads 00:08:58.631 00:08:58.631 job0: (groupid=0, jobs=1): err= 0: pid=2503090: Fri Dec 6 03:17:18 2024 00:08:58.631 read: IOPS=1438, BW=5754KiB/s (5892kB/s)(5760KiB/1001msec) 00:08:58.631 slat (nsec): min=3206, max=35739, avg=7197.04, stdev=2872.07 00:08:58.631 clat (usec): min=192, max=41032, avg=467.63, stdev=2517.09 00:08:58.631 lat (usec): min=196, max=41040, avg=474.83, stdev=2517.57 00:08:58.631 clat percentiles (usec): 00:08:58.631 | 1.00th=[ 219], 5.00th=[ 241], 10.00th=[ 251], 20.00th=[ 265], 00:08:58.631 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:08:58.631 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 371], 95.00th=[ 453], 00:08:58.631 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:08:58.631 | 99.99th=[41157] 00:08:58.631 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:58.631 slat (nsec): min=4233, max=41793, avg=10213.35, 
stdev=2980.16 00:08:58.631 clat (usec): min=127, max=464, avg=191.47, stdev=22.45 00:08:58.631 lat (usec): min=135, max=475, avg=201.68, stdev=22.15 00:08:58.631 clat percentiles (usec): 00:08:58.631 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 174], 00:08:58.631 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:08:58.631 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:08:58.631 | 99.00th=[ 247], 99.50th=[ 273], 99.90th=[ 367], 99.95th=[ 465], 00:08:58.631 | 99.99th=[ 465] 00:08:58.631 bw ( KiB/s): min= 4376, max= 4376, per=14.77%, avg=4376.00, stdev= 0.00, samples=1 00:08:58.631 iops : min= 1094, max= 1094, avg=1094.00, stdev= 0.00, samples=1 00:08:58.631 lat (usec) : 250=55.44%, 500=43.25%, 750=1.11% 00:08:58.631 lat (msec) : 50=0.20% 00:08:58.631 cpu : usr=1.00%, sys=3.00%, ctx=2977, majf=0, minf=1 00:08:58.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.631 issued rwts: total=1440,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.631 job1: (groupid=0, jobs=1): err= 0: pid=2503117: Fri Dec 6 03:17:18 2024 00:08:58.631 read: IOPS=1006, BW=4027KiB/s (4124kB/s)(4104KiB/1019msec) 00:08:58.631 slat (nsec): min=6700, max=28272, avg=7971.68, stdev=1829.46 00:08:58.631 clat (usec): min=230, max=41982, avg=716.60, stdev=4016.44 00:08:58.631 lat (usec): min=242, max=41992, avg=724.57, stdev=4017.50 00:08:58.631 clat percentiles (usec): 00:08:58.631 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 281], 00:08:58.631 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 334], 00:08:58.631 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 383], 00:08:58.631 | 99.00th=[ 553], 99.50th=[41157], 99.90th=[41681], 
99.95th=[42206], 00:08:58.631 | 99.99th=[42206] 00:08:58.631 write: IOPS=1507, BW=6029KiB/s (6174kB/s)(6144KiB/1019msec); 0 zone resets 00:08:58.631 slat (nsec): min=9314, max=39189, avg=10461.10, stdev=1328.04 00:08:58.631 clat (usec): min=125, max=382, avg=165.46, stdev=17.38 00:08:58.631 lat (usec): min=136, max=414, avg=175.92, stdev=17.73 00:08:58.631 clat percentiles (usec): 00:08:58.631 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:08:58.631 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:08:58.631 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:08:58.631 | 99.00th=[ 210], 99.50th=[ 251], 99.90th=[ 363], 99.95th=[ 383], 00:08:58.631 | 99.99th=[ 383] 00:08:58.631 bw ( KiB/s): min= 4096, max= 8192, per=20.74%, avg=6144.00, stdev=2896.31, samples=2 00:08:58.631 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:08:58.631 lat (usec) : 250=60.03%, 500=39.54%, 750=0.04% 00:08:58.631 lat (msec) : 50=0.39% 00:08:58.631 cpu : usr=1.28%, sys=2.36%, ctx=2562, majf=0, minf=1 00:08:58.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.632 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.632 job2: (groupid=0, jobs=1): err= 0: pid=2503151: Fri Dec 6 03:17:18 2024 00:08:58.632 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:58.632 slat (nsec): min=6696, max=26788, avg=7828.86, stdev=984.36 00:08:58.632 clat (usec): min=183, max=952, avg=260.59, stdev=46.75 00:08:58.632 lat (usec): min=191, max=976, avg=268.42, stdev=46.81 00:08:58.632 clat percentiles (usec): 00:08:58.632 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 225], 00:08:58.632 | 30.00th=[ 233], 40.00th=[ 241], 
50.00th=[ 249], 60.00th=[ 260], 00:08:58.632 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 334], 95.00th=[ 351], 00:08:58.632 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 553], 99.95th=[ 668], 00:08:58.632 | 99.99th=[ 955] 00:08:58.632 write: IOPS=2425, BW=9702KiB/s (9935kB/s)(9712KiB/1001msec); 0 zone resets 00:08:58.632 slat (nsec): min=9924, max=43142, avg=11104.27, stdev=1521.47 00:08:58.632 clat (usec): min=123, max=725, avg=170.49, stdev=23.43 00:08:58.632 lat (usec): min=134, max=735, avg=181.60, stdev=23.56 00:08:58.632 clat percentiles (usec): 00:08:58.632 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:08:58.632 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:08:58.632 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 202], 00:08:58.632 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 510], 99.95th=[ 519], 00:08:58.632 | 99.99th=[ 725] 00:08:58.632 bw ( KiB/s): min=10656, max=10656, per=35.96%, avg=10656.00, stdev= 0.00, samples=1 00:08:58.632 iops : min= 2664, max= 2664, avg=2664.00, stdev= 0.00, samples=1 00:08:58.632 lat (usec) : 250=77.55%, 500=22.30%, 750=0.13%, 1000=0.02% 00:08:58.632 cpu : usr=2.50%, sys=4.10%, ctx=4477, majf=0, minf=1 00:08:58.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.632 issued rwts: total=2048,2428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.632 job3: (groupid=0, jobs=1): err= 0: pid=2503164: Fri Dec 6 03:17:18 2024 00:08:58.632 read: IOPS=1855, BW=7421KiB/s (7599kB/s)(7428KiB/1001msec) 00:08:58.632 slat (nsec): min=8468, max=24404, avg=9436.82, stdev=1089.70 00:08:58.632 clat (usec): min=213, max=680, avg=297.08, stdev=61.38 00:08:58.632 lat (usec): min=222, max=690, avg=306.52, stdev=61.40 00:08:58.632 
clat percentiles (usec): 00:08:58.632 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 251], 00:08:58.632 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 293], 00:08:58.632 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 375], 95.00th=[ 441], 00:08:58.632 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 685], 00:08:58.632 | 99.99th=[ 685] 00:08:58.632 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:58.632 slat (nsec): min=11078, max=65753, avg=13461.74, stdev=2797.92 00:08:58.632 clat (usec): min=142, max=385, avg=190.24, stdev=20.54 00:08:58.632 lat (usec): min=156, max=398, avg=203.71, stdev=21.02 00:08:58.632 clat percentiles (usec): 00:08:58.632 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:08:58.632 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:08:58.632 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 229], 00:08:58.632 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 302], 99.95th=[ 306], 00:08:58.632 | 99.99th=[ 388] 00:08:58.632 bw ( KiB/s): min= 8192, max= 8192, per=27.65%, avg=8192.00, stdev= 0.00, samples=1 00:08:58.632 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:58.632 lat (usec) : 250=60.87%, 500=38.54%, 750=0.59% 00:08:58.632 cpu : usr=3.70%, sys=6.90%, ctx=3906, majf=0, minf=1 00:08:58.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.632 issued rwts: total=1857,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.632 00:08:58.632 Run status group 0 (all jobs): 00:08:58.632 READ: bw=24.4MiB/s (25.6MB/s), 4027KiB/s-8184KiB/s (4124kB/s-8380kB/s), io=24.9MiB (26.1MB), run=1001-1019msec 00:08:58.632 WRITE: bw=28.9MiB/s (30.3MB/s), 
6029KiB/s-9702KiB/s (6174kB/s-9935kB/s), io=29.5MiB (30.9MB), run=1001-1019msec 00:08:58.632 00:08:58.632 Disk stats (read/write): 00:08:58.632 nvme0n1: ios=1070/1247, merge=0/0, ticks=962/228, in_queue=1190, util=97.29% 00:08:58.632 nvme0n2: ios=888/1024, merge=0/0, ticks=811/167, in_queue=978, util=85.57% 00:08:58.632 nvme0n3: ios=1587/2041, merge=0/0, ticks=820/347, in_queue=1167, util=97.49% 00:08:58.632 nvme0n4: ios=1536/1627, merge=0/0, ticks=427/283, in_queue=710, util=89.11% 00:08:58.632 03:17:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:58.632 [global] 00:08:58.632 thread=1 00:08:58.632 invalidate=1 00:08:58.632 rw=randwrite 00:08:58.632 time_based=1 00:08:58.632 runtime=1 00:08:58.632 ioengine=libaio 00:08:58.632 direct=1 00:08:58.632 bs=4096 00:08:58.632 iodepth=1 00:08:58.632 norandommap=0 00:08:58.632 numjobs=1 00:08:58.632 00:08:58.632 verify_dump=1 00:08:58.632 verify_backlog=512 00:08:58.632 verify_state_save=0 00:08:58.632 do_verify=1 00:08:58.632 verify=crc32c-intel 00:08:58.632 [job0] 00:08:58.632 filename=/dev/nvme0n1 00:08:58.632 [job1] 00:08:58.632 filename=/dev/nvme0n2 00:08:58.632 [job2] 00:08:58.632 filename=/dev/nvme0n3 00:08:58.632 [job3] 00:08:58.632 filename=/dev/nvme0n4 00:08:58.632 Could not set queue depth (nvme0n1) 00:08:58.632 Could not set queue depth (nvme0n2) 00:08:58.632 Could not set queue depth (nvme0n3) 00:08:58.632 Could not set queue depth (nvme0n4) 00:08:58.632 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.632 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.632 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.632 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.632 fio-3.35 00:08:58.632 Starting 4 threads 00:09:00.013 00:09:00.013 job0: (groupid=0, jobs=1): err= 0: pid=2503543: Fri Dec 6 03:17:19 2024 00:09:00.013 read: IOPS=25, BW=101KiB/s (103kB/s)(104KiB/1029msec) 00:09:00.013 slat (nsec): min=9082, max=24419, avg=17130.19, stdev=5972.65 00:09:00.013 clat (usec): min=245, max=41452, avg=34727.74, stdev=14956.21 00:09:00.013 lat (usec): min=255, max=41462, avg=34744.87, stdev=14954.93 00:09:00.013 clat percentiles (usec): 00:09:00.013 | 1.00th=[ 245], 5.00th=[ 269], 10.00th=[ 314], 20.00th=[40633], 00:09:00.013 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:00.013 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:00.013 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:00.013 | 99.99th=[41681] 00:09:00.013 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:00.013 slat (nsec): min=10053, max=59313, avg=12901.80, stdev=5442.85 00:09:00.013 clat (usec): min=146, max=588, avg=229.55, stdev=49.51 00:09:00.013 lat (usec): min=156, max=600, avg=242.45, stdev=50.69 00:09:00.013 clat percentiles (usec): 00:09:00.013 | 1.00th=[ 155], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 192], 00:09:00.013 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 233], 00:09:00.013 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 285], 95.00th=[ 334], 00:09:00.013 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 586], 99.95th=[ 586], 00:09:00.013 | 99.99th=[ 586] 00:09:00.013 bw ( KiB/s): min= 4096, max= 4096, per=18.71%, avg=4096.00, stdev= 0.00, samples=1 00:09:00.013 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:00.013 lat (usec) : 250=73.42%, 500=22.12%, 750=0.37% 00:09:00.013 lat (msec) : 50=4.09% 00:09:00.013 cpu : usr=0.68%, sys=0.68%, ctx=538, majf=0, minf=1 00:09:00.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.013 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.013 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.013 job1: (groupid=0, jobs=1): err= 0: pid=2503545: Fri Dec 6 03:17:19 2024 00:09:00.013 read: IOPS=1015, BW=4063KiB/s (4160kB/s)(4144KiB/1020msec) 00:09:00.013 slat (nsec): min=3237, max=28124, avg=6652.77, stdev=2713.26 00:09:00.013 clat (usec): min=173, max=41384, avg=713.92, stdev=4364.87 00:09:00.013 lat (usec): min=177, max=41394, avg=720.58, stdev=4366.44 00:09:00.013 clat percentiles (usec): 00:09:00.013 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:09:00.013 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:09:00.013 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 347], 00:09:00.013 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:00.013 | 99.99th=[41157] 00:09:00.013 write: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec); 0 zone resets 00:09:00.013 slat (nsec): min=4226, max=36443, avg=9076.11, stdev=3034.20 00:09:00.013 clat (usec): min=125, max=426, avg=164.27, stdev=23.45 00:09:00.013 lat (usec): min=133, max=463, avg=173.35, stdev=24.52 00:09:00.013 clat percentiles (usec): 00:09:00.013 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:09:00.013 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:09:00.013 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 215], 00:09:00.013 | 99.00th=[ 255], 99.50th=[ 277], 99.90th=[ 371], 99.95th=[ 429], 00:09:00.013 | 99.99th=[ 429] 00:09:00.013 bw ( KiB/s): min= 1200, max=11088, per=28.06%, avg=6144.00, stdev=6991.87, samples=2 00:09:00.013 iops : min= 300, max= 2772, avg=1536.00, stdev=1747.97, samples=2 00:09:00.013 lat (usec) : 250=85.93%, 500=13.61% 00:09:00.013 lat (msec) : 
50=0.47% 00:09:00.013 cpu : usr=1.28%, sys=1.86%, ctx=2575, majf=0, minf=1 00:09:00.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.013 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.013 job2: (groupid=0, jobs=1): err= 0: pid=2503550: Fri Dec 6 03:17:19 2024 00:09:00.013 read: IOPS=1521, BW=6085KiB/s (6231kB/s)(6164KiB/1013msec) 00:09:00.013 slat (nsec): min=6823, max=24171, avg=8291.61, stdev=1346.30 00:09:00.013 clat (usec): min=194, max=41246, avg=385.14, stdev=2319.04 00:09:00.013 lat (usec): min=201, max=41257, avg=393.43, stdev=2319.76 00:09:00.013 clat percentiles (usec): 00:09:00.013 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:09:00.013 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:09:00.013 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 297], 00:09:00.013 | 99.00th=[ 330], 99.50th=[ 478], 99.90th=[41157], 99.95th=[41157], 00:09:00.013 | 99.99th=[41157] 00:09:00.013 write: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec); 0 zone resets 00:09:00.013 slat (nsec): min=9103, max=38816, avg=11353.90, stdev=1709.93 00:09:00.013 clat (usec): min=118, max=432, avg=182.51, stdev=41.01 00:09:00.013 lat (usec): min=128, max=444, avg=193.86, stdev=41.54 00:09:00.013 clat percentiles (usec): 00:09:00.013 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:09:00.013 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 182], 00:09:00.013 | 70.00th=[ 192], 80.00th=[ 215], 90.00th=[ 241], 95.00th=[ 258], 00:09:00.013 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 388], 00:09:00.013 | 99.99th=[ 433] 00:09:00.013 bw ( KiB/s): min= 7856, max= 8528, per=37.42%, avg=8192.00, 
stdev=475.18, samples=2 00:09:00.013 iops : min= 1964, max= 2132, avg=2048.00, stdev=118.79, samples=2 00:09:00.013 lat (usec) : 250=74.78%, 500=25.08% 00:09:00.013 lat (msec) : 50=0.14% 00:09:00.013 cpu : usr=2.08%, sys=3.46%, ctx=3590, majf=0, minf=1 00:09:00.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.013 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.013 job3: (groupid=0, jobs=1): err= 0: pid=2503551: Fri Dec 6 03:17:19 2024 00:09:00.013 read: IOPS=1278, BW=5115KiB/s (5238kB/s)(5248KiB/1026msec) 00:09:00.014 slat (nsec): min=6969, max=27706, avg=7985.97, stdev=1620.03 00:09:00.014 clat (usec): min=207, max=41168, avg=540.39, stdev=3170.07 00:09:00.014 lat (usec): min=215, max=41177, avg=548.37, stdev=3171.19 00:09:00.014 clat percentiles (usec): 00:09:00.014 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:09:00.014 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 285], 00:09:00.014 | 70.00th=[ 310], 80.00th=[ 343], 90.00th=[ 383], 95.00th=[ 457], 00:09:00.014 | 99.00th=[ 494], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:00.014 | 99.99th=[41157] 00:09:00.014 write: IOPS=1497, BW=5988KiB/s (6132kB/s)(6144KiB/1026msec); 0 zone resets 00:09:00.014 slat (nsec): min=9681, max=38005, avg=10760.45, stdev=1426.18 00:09:00.014 clat (usec): min=128, max=1156, avg=182.99, stdev=42.68 00:09:00.014 lat (usec): min=138, max=1190, avg=193.75, stdev=43.22 00:09:00.014 clat percentiles (usec): 00:09:00.014 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 159], 00:09:00.014 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 182], 00:09:00.014 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 225], 95.00th=[ 247], 
00:09:00.014 | 99.00th=[ 265], 99.50th=[ 310], 99.90th=[ 619], 99.95th=[ 1156], 00:09:00.014 | 99.99th=[ 1156] 00:09:00.014 bw ( KiB/s): min= 4096, max= 8192, per=28.06%, avg=6144.00, stdev=2896.31, samples=2 00:09:00.014 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:00.014 lat (usec) : 250=65.06%, 500=34.48%, 750=0.14% 00:09:00.014 lat (msec) : 2=0.04%, 50=0.28% 00:09:00.014 cpu : usr=0.98%, sys=3.12%, ctx=2849, majf=0, minf=1 00:09:00.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.014 issued rwts: total=1312,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.014 00:09:00.014 Run status group 0 (all jobs): 00:09:00.014 READ: bw=14.9MiB/s (15.6MB/s), 101KiB/s-6085KiB/s (103kB/s-6231kB/s), io=15.3MiB (16.0MB), run=1013-1029msec 00:09:00.014 WRITE: bw=21.4MiB/s (22.4MB/s), 1990KiB/s-8087KiB/s (2038kB/s-8281kB/s), io=22.0MiB (23.1MB), run=1013-1029msec 00:09:00.014 00:09:00.014 Disk stats (read/write): 00:09:00.014 nvme0n1: ios=45/512, merge=0/0, ticks=800/110, in_queue=910, util=93.19% 00:09:00.014 nvme0n2: ios=1055/1536, merge=0/0, ticks=1473/244, in_queue=1717, util=97.12% 00:09:00.014 nvme0n3: ios=1557/2032, merge=0/0, ticks=562/361, in_queue=923, util=91.74% 00:09:00.014 nvme0n4: ios=1346/1536, merge=0/0, ticks=618/263, in_queue=881, util=98.67% 00:09:00.014 03:17:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:00.014 [global] 00:09:00.014 thread=1 00:09:00.014 invalidate=1 00:09:00.014 rw=write 00:09:00.014 time_based=1 00:09:00.014 runtime=1 00:09:00.014 ioengine=libaio 00:09:00.014 direct=1 00:09:00.014 bs=4096 00:09:00.014 
iodepth=128 00:09:00.014 norandommap=0 00:09:00.014 numjobs=1 00:09:00.014 00:09:00.014 verify_dump=1 00:09:00.014 verify_backlog=512 00:09:00.014 verify_state_save=0 00:09:00.014 do_verify=1 00:09:00.014 verify=crc32c-intel 00:09:00.014 [job0] 00:09:00.014 filename=/dev/nvme0n1 00:09:00.014 [job1] 00:09:00.014 filename=/dev/nvme0n2 00:09:00.014 [job2] 00:09:00.014 filename=/dev/nvme0n3 00:09:00.014 [job3] 00:09:00.014 filename=/dev/nvme0n4 00:09:00.014 Could not set queue depth (nvme0n1) 00:09:00.014 Could not set queue depth (nvme0n2) 00:09:00.014 Could not set queue depth (nvme0n3) 00:09:00.014 Could not set queue depth (nvme0n4) 00:09:00.273 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.273 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.273 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.273 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.273 fio-3.35 00:09:00.273 Starting 4 threads 00:09:01.648 00:09:01.648 job0: (groupid=0, jobs=1): err= 0: pid=2503926: Fri Dec 6 03:17:21 2024 00:09:01.648 read: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1044msec) 00:09:01.648 slat (nsec): min=1095, max=44989k, avg=165753.94, stdev=1349456.28 00:09:01.648 clat (usec): min=5562, max=87348, avg=21902.57, stdev=14084.14 00:09:01.648 lat (usec): min=5584, max=87377, avg=22068.33, stdev=14180.23 00:09:01.648 clat percentiles (usec): 00:09:01.648 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[10945], 20.00th=[12256], 00:09:01.648 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14484], 60.00th=[18744], 00:09:01.648 | 70.00th=[23200], 80.00th=[31327], 90.00th=[44303], 95.00th=[54789], 00:09:01.648 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[80217], 00:09:01.648 | 99.99th=[87557] 00:09:01.648 write: 
IOPS=3439, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1044msec); 0 zone resets 00:09:01.648 slat (nsec): min=1853, max=13142k, avg=97080.12, stdev=592197.87 00:09:01.648 clat (usec): min=1104, max=42886, avg=15011.27, stdev=8351.09 00:09:01.648 lat (usec): min=1116, max=42897, avg=15108.35, stdev=8404.38 00:09:01.648 clat percentiles (usec): 00:09:01.648 | 1.00th=[ 3130], 5.00th=[ 5473], 10.00th=[ 7373], 20.00th=[ 8848], 00:09:01.648 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[14222], 00:09:01.648 | 70.00th=[16581], 80.00th=[21365], 90.00th=[27919], 95.00th=[32113], 00:09:01.648 | 99.00th=[38536], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:09:01.648 | 99.99th=[42730] 00:09:01.648 bw ( KiB/s): min=12288, max=16384, per=24.23%, avg=14336.00, stdev=2896.31, samples=2 00:09:01.648 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:01.648 lat (msec) : 2=0.26%, 4=0.24%, 10=13.84%, 20=55.23%, 50=27.11% 00:09:01.648 lat (msec) : 100=3.32% 00:09:01.648 cpu : usr=2.68%, sys=3.84%, ctx=346, majf=0, minf=2 00:09:01.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:01.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.648 issued rwts: total=3584,3591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.648 job1: (groupid=0, jobs=1): err= 0: pid=2503927: Fri Dec 6 03:17:21 2024 00:09:01.648 read: IOPS=5117, BW=20.0MiB/s (21.0MB/s)(20.1MiB/1005msec) 00:09:01.648 slat (nsec): min=1141, max=9565.0k, avg=81844.22, stdev=496031.01 00:09:01.648 clat (usec): min=4109, max=37825, avg=10258.07, stdev=3242.12 00:09:01.648 lat (usec): min=4733, max=37832, avg=10339.92, stdev=3278.38 00:09:01.648 clat percentiles (usec): 00:09:01.648 | 1.00th=[ 5604], 5.00th=[ 6194], 10.00th=[ 7635], 20.00th=[ 8029], 00:09:01.648 | 30.00th=[ 8225], 
40.00th=[ 8356], 50.00th=[ 9634], 60.00th=[10421], 00:09:01.648 | 70.00th=[11207], 80.00th=[12387], 90.00th=[13829], 95.00th=[16057], 00:09:01.648 | 99.00th=[20579], 99.50th=[25560], 99.90th=[25822], 99.95th=[26084], 00:09:01.648 | 99.99th=[38011] 00:09:01.648 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:09:01.648 slat (usec): min=2, max=23213, avg=98.26, stdev=640.83 00:09:01.648 clat (usec): min=4331, max=50186, avg=12622.19, stdev=6957.96 00:09:01.648 lat (usec): min=4335, max=50217, avg=12720.46, stdev=7018.16 00:09:01.648 clat percentiles (usec): 00:09:01.648 | 1.00th=[ 5538], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8356], 00:09:01.648 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9896], 60.00th=[10552], 00:09:01.648 | 70.00th=[12256], 80.00th=[16712], 90.00th=[19268], 95.00th=[31065], 00:09:01.648 | 99.00th=[36963], 99.50th=[39584], 99.90th=[39584], 99.95th=[42730], 00:09:01.648 | 99.99th=[50070] 00:09:01.648 bw ( KiB/s): min=16384, max=27840, per=37.37%, avg=22112.00, stdev=8100.62, samples=2 00:09:01.648 iops : min= 4096, max= 6960, avg=5528.00, stdev=2025.15, samples=2 00:09:01.648 lat (msec) : 10=52.04%, 20=42.24%, 50=5.72%, 100=0.01% 00:09:01.648 cpu : usr=3.29%, sys=4.88%, ctx=701, majf=0, minf=1 00:09:01.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:01.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.648 issued rwts: total=5143,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.648 job2: (groupid=0, jobs=1): err= 0: pid=2503928: Fri Dec 6 03:17:21 2024 00:09:01.648 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:09:01.648 slat (nsec): min=1191, max=12246k, avg=91455.85, stdev=699012.66 00:09:01.648 clat (usec): min=5347, max=34763, avg=12597.29, stdev=3795.51 00:09:01.648 lat 
(usec): min=5353, max=34798, avg=12688.75, stdev=3866.29 00:09:01.648 clat percentiles (usec): 00:09:01.648 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10290], 00:09:01.648 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[12125], 00:09:01.648 | 70.00th=[12911], 80.00th=[14222], 90.00th=[15533], 95.00th=[19530], 00:09:01.648 | 99.00th=[30540], 99.50th=[32637], 99.90th=[34866], 99.95th=[34866], 00:09:01.648 | 99.99th=[34866] 00:09:01.648 write: IOPS=3157, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1015msec); 0 zone resets 00:09:01.648 slat (nsec): min=1908, max=52947k, avg=191668.10, stdev=1932623.73 00:09:01.648 clat (msec): min=3, max=211, avg=21.01, stdev=25.70 00:09:01.648 lat (msec): min=3, max=211, avg=21.20, stdev=26.07 00:09:01.648 clat percentiles (msec): 00:09:01.648 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:09:01.649 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:09:01.649 | 70.00th=[ 17], 80.00th=[ 22], 90.00th=[ 43], 95.00th=[ 86], 00:09:01.649 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 180], 99.95th=[ 211], 00:09:01.649 | 99.99th=[ 211] 00:09:01.649 bw ( KiB/s): min=12288, max=12400, per=20.86%, avg=12344.00, stdev=79.20, samples=2 00:09:01.649 iops : min= 3072, max= 3100, avg=3086.00, stdev=19.80, samples=2 00:09:01.649 lat (msec) : 4=0.10%, 10=26.53%, 20=60.44%, 50=8.59%, 100=2.34% 00:09:01.649 lat (msec) : 250=2.01% 00:09:01.649 cpu : usr=2.17%, sys=4.44%, ctx=198, majf=0, minf=1 00:09:01.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:01.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.649 issued rwts: total=3072,3205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.649 job3: (groupid=0, jobs=1): err= 0: pid=2503929: Fri Dec 6 03:17:21 2024 00:09:01.649 read: IOPS=2522, 
BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:09:01.649 slat (usec): min=2, max=13352, avg=129.51, stdev=908.17 00:09:01.649 clat (usec): min=2615, max=67229, avg=15413.88, stdev=9112.85 00:09:01.649 lat (usec): min=2623, max=67242, avg=15543.39, stdev=9224.50 00:09:01.649 clat percentiles (usec): 00:09:01.649 | 1.00th=[ 3851], 5.00th=[ 6849], 10.00th=[ 7635], 20.00th=[ 9634], 00:09:01.649 | 30.00th=[11863], 40.00th=[13042], 50.00th=[13435], 60.00th=[14222], 00:09:01.649 | 70.00th=[15270], 80.00th=[17433], 90.00th=[22152], 95.00th=[36963], 00:09:01.649 | 99.00th=[51119], 99.50th=[62653], 99.90th=[67634], 99.95th=[67634], 00:09:01.649 | 99.99th=[67634] 00:09:01.649 write: IOPS=2969, BW=11.6MiB/s (12.2MB/s)(11.8MiB/1015msec); 0 zone resets 00:09:01.649 slat (usec): min=2, max=12312, avg=203.38, stdev=1087.17 00:09:01.649 clat (usec): min=290, max=119125, avg=29568.44, stdev=28364.61 00:09:01.649 lat (usec): min=323, max=119138, avg=29771.82, stdev=28563.74 00:09:01.649 clat percentiles (usec): 00:09:01.649 | 1.00th=[ 1582], 5.00th=[ 4555], 10.00th=[ 8979], 20.00th=[ 10683], 00:09:01.649 | 30.00th=[ 11469], 40.00th=[ 11994], 50.00th=[ 15008], 60.00th=[ 22152], 00:09:01.649 | 70.00th=[ 32900], 80.00th=[ 46400], 90.00th=[ 80217], 95.00th=[ 96994], 00:09:01.649 | 99.00th=[109577], 99.50th=[113771], 99.90th=[119014], 99.95th=[119014], 00:09:01.649 | 99.99th=[119014] 00:09:01.649 bw ( KiB/s): min= 9472, max=13616, per=19.51%, avg=11544.00, stdev=2930.25, samples=2 00:09:01.649 iops : min= 2368, max= 3404, avg=2886.00, stdev=732.56, samples=2 00:09:01.649 lat (usec) : 500=0.11%, 1000=0.04% 00:09:01.649 lat (msec) : 2=0.61%, 4=2.19%, 10=16.68%, 20=48.91%, 50=20.65% 00:09:01.649 lat (msec) : 100=8.97%, 250=1.85% 00:09:01.649 cpu : usr=3.16%, sys=4.24%, ctx=258, majf=0, minf=1 00:09:01.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:01.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.649 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.649 issued rwts: total=2560,3014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.649 00:09:01.649 Run status group 0 (all jobs): 00:09:01.649 READ: bw=53.7MiB/s (56.3MB/s), 9.85MiB/s-20.0MiB/s (10.3MB/s-21.0MB/s), io=56.1MiB (58.8MB), run=1005-1044msec 00:09:01.649 WRITE: bw=57.8MiB/s (60.6MB/s), 11.6MiB/s-21.9MiB/s (12.2MB/s-23.0MB/s), io=60.3MiB (63.2MB), run=1005-1044msec 00:09:01.649 00:09:01.649 Disk stats (read/write): 00:09:01.649 nvme0n1: ios=2782/3072, merge=0/0, ticks=32778/25578, in_queue=58356, util=86.67% 00:09:01.649 nvme0n2: ios=4147/4607, merge=0/0, ticks=19745/26742, in_queue=46487, util=98.48% 00:09:01.649 nvme0n3: ios=2587/2775, merge=0/0, ticks=26840/38124, in_queue=64964, util=99.17% 00:09:01.649 nvme0n4: ios=2090/2560, merge=0/0, ticks=32454/72516, in_queue=104970, util=99.58% 00:09:01.649 03:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:01.649 [global] 00:09:01.649 thread=1 00:09:01.649 invalidate=1 00:09:01.649 rw=randwrite 00:09:01.649 time_based=1 00:09:01.649 runtime=1 00:09:01.649 ioengine=libaio 00:09:01.649 direct=1 00:09:01.649 bs=4096 00:09:01.649 iodepth=128 00:09:01.649 norandommap=0 00:09:01.649 numjobs=1 00:09:01.649 00:09:01.649 verify_dump=1 00:09:01.649 verify_backlog=512 00:09:01.649 verify_state_save=0 00:09:01.649 do_verify=1 00:09:01.649 verify=crc32c-intel 00:09:01.649 [job0] 00:09:01.649 filename=/dev/nvme0n1 00:09:01.649 [job1] 00:09:01.649 filename=/dev/nvme0n2 00:09:01.649 [job2] 00:09:01.649 filename=/dev/nvme0n3 00:09:01.649 [job3] 00:09:01.649 filename=/dev/nvme0n4 00:09:01.649 Could not set queue depth (nvme0n1) 00:09:01.649 Could not set queue depth (nvme0n2) 00:09:01.649 Could not set queue depth (nvme0n3) 00:09:01.649 Could not set queue 
depth (nvme0n4) 00:09:01.907 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.907 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.907 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.907 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.907 fio-3.35 00:09:01.907 Starting 4 threads 00:09:03.306 00:09:03.306 job0: (groupid=0, jobs=1): err= 0: pid=2504296: Fri Dec 6 03:17:23 2024 00:09:03.306 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:09:03.306 slat (nsec): min=1452, max=13170k, avg=136596.86, stdev=851666.47 00:09:03.306 clat (usec): min=4470, max=61285, avg=15548.57, stdev=8213.59 00:09:03.306 lat (usec): min=4480, max=61289, avg=15685.16, stdev=8284.50 00:09:03.307 clat percentiles (usec): 00:09:03.307 | 1.00th=[ 6652], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10028], 00:09:03.307 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[14615], 00:09:03.307 | 70.00th=[15795], 80.00th=[19268], 90.00th=[25560], 95.00th=[32375], 00:09:03.307 | 99.00th=[49546], 99.50th=[57410], 99.90th=[61080], 99.95th=[61080], 00:09:03.307 | 99.99th=[61080] 00:09:03.307 write: IOPS=3202, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1011msec); 0 zone resets 00:09:03.307 slat (usec): min=2, max=13030, avg=168.82, stdev=732.16 00:09:03.307 clat (usec): min=1860, max=61285, avg=24720.15, stdev=12938.89 00:09:03.307 lat (usec): min=1871, max=61293, avg=24888.97, stdev=13014.13 00:09:03.307 clat percentiles (usec): 00:09:03.307 | 1.00th=[ 3228], 5.00th=[ 6325], 10.00th=[ 8717], 20.00th=[12518], 00:09:03.307 | 30.00th=[16188], 40.00th=[19792], 50.00th=[22676], 60.00th=[26608], 00:09:03.307 | 70.00th=[32113], 80.00th=[35914], 90.00th=[43254], 95.00th=[50070], 00:09:03.307 | 99.00th=[56361], 
99.50th=[56361], 99.90th=[58983], 99.95th=[61080], 00:09:03.307 | 99.99th=[61080] 00:09:03.307 bw ( KiB/s): min=11968, max=12912, per=18.99%, avg=12440.00, stdev=667.51, samples=2 00:09:03.307 iops : min= 2992, max= 3228, avg=3110.00, stdev=166.88, samples=2 00:09:03.307 lat (msec) : 2=0.25%, 4=0.41%, 10=14.33%, 20=45.90%, 50=35.99% 00:09:03.307 lat (msec) : 100=3.12% 00:09:03.307 cpu : usr=3.27%, sys=3.47%, ctx=395, majf=0, minf=1 00:09:03.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:03.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.307 issued rwts: total=3072,3238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.307 job1: (groupid=0, jobs=1): err= 0: pid=2504297: Fri Dec 6 03:17:23 2024 00:09:03.307 read: IOPS=3726, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1006msec) 00:09:03.307 slat (nsec): min=1066, max=21266k, avg=134965.06, stdev=1012083.36 00:09:03.307 clat (usec): min=3105, max=46263, avg=16649.45, stdev=8053.02 00:09:03.307 lat (usec): min=6257, max=46273, avg=16784.42, stdev=8121.36 00:09:03.307 clat percentiles (usec): 00:09:03.307 | 1.00th=[ 6521], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10421], 00:09:03.307 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12780], 60.00th=[16057], 00:09:03.307 | 70.00th=[20317], 80.00th=[24249], 90.00th=[30802], 95.00th=[32113], 00:09:03.307 | 99.00th=[38536], 99.50th=[38536], 99.90th=[43779], 99.95th=[46400], 00:09:03.307 | 99.99th=[46400] 00:09:03.307 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:09:03.307 slat (nsec): min=1913, max=26738k, avg=115583.90, stdev=803813.43 00:09:03.307 clat (usec): min=5859, max=51626, avg=15807.03, stdev=8853.76 00:09:03.307 lat (usec): min=5866, max=51647, avg=15922.62, stdev=8915.41 00:09:03.307 clat percentiles (usec): 00:09:03.307 | 
1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[ 9896], 00:09:03.307 | 30.00th=[10683], 40.00th=[11207], 50.00th=[12256], 60.00th=[13304], 00:09:03.307 | 70.00th=[16188], 80.00th=[20055], 90.00th=[29492], 95.00th=[33424], 00:09:03.307 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:09:03.307 | 99.99th=[51643] 00:09:03.307 bw ( KiB/s): min=16384, max=16384, per=25.01%, avg=16384.00, stdev= 0.00, samples=2 00:09:03.307 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:03.307 lat (msec) : 4=0.01%, 10=19.75%, 20=54.62%, 50=25.51%, 100=0.11% 00:09:03.307 cpu : usr=3.28%, sys=3.48%, ctx=367, majf=0, minf=1 00:09:03.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:03.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.307 issued rwts: total=3749,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.307 job2: (groupid=0, jobs=1): err= 0: pid=2504298: Fri Dec 6 03:17:23 2024 00:09:03.307 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:09:03.307 slat (nsec): min=1122, max=12106k, avg=95427.29, stdev=717510.43 00:09:03.307 clat (usec): min=3159, max=41502, avg=13512.49, stdev=6456.77 00:09:03.307 lat (usec): min=3182, max=41526, avg=13607.92, stdev=6516.95 00:09:03.307 clat percentiles (usec): 00:09:03.307 | 1.00th=[ 3982], 5.00th=[ 7504], 10.00th=[ 8717], 20.00th=[ 9241], 00:09:03.307 | 30.00th=[ 9634], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:09:03.307 | 70.00th=[13435], 80.00th=[15533], 90.00th=[25560], 95.00th=[29492], 00:09:03.307 | 99.00th=[32113], 99.50th=[32900], 99.90th=[39584], 99.95th=[39584], 00:09:03.307 | 99.99th=[41681] 00:09:03.307 write: IOPS=5367, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1006msec); 0 zone resets 00:09:03.307 slat (nsec): min=1964, max=10204k, 
avg=78164.30, stdev=449042.76 00:09:03.307 clat (usec): min=661, max=29882, avg=10826.51, stdev=4361.21 00:09:03.307 lat (usec): min=673, max=29890, avg=10904.67, stdev=4395.11 00:09:03.307 clat percentiles (usec): 00:09:03.307 | 1.00th=[ 2409], 5.00th=[ 4113], 10.00th=[ 5473], 20.00th=[ 7767], 00:09:03.307 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[11207], 00:09:03.307 | 70.00th=[12125], 80.00th=[12780], 90.00th=[17957], 95.00th=[19006], 00:09:03.307 | 99.00th=[23987], 99.50th=[25297], 99.90th=[26608], 99.95th=[26870], 00:09:03.307 | 99.99th=[29754] 00:09:03.307 bw ( KiB/s): min=17552, max=24632, per=32.20%, avg=21092.00, stdev=5006.32, samples=2 00:09:03.307 iops : min= 4388, max= 6158, avg=5273.00, stdev=1251.58, samples=2 00:09:03.307 lat (usec) : 750=0.05%, 1000=0.09% 00:09:03.307 lat (msec) : 2=0.16%, 4=2.26%, 10=34.94%, 20=53.77%, 50=8.73% 00:09:03.307 cpu : usr=3.38%, sys=5.47%, ctx=506, majf=0, minf=2 00:09:03.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:03.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.307 issued rwts: total=5120,5400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.307 job3: (groupid=0, jobs=1): err= 0: pid=2504299: Fri Dec 6 03:17:23 2024 00:09:03.307 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:09:03.307 slat (nsec): min=1628, max=11890k, avg=116214.64, stdev=749075.21 00:09:03.307 clat (usec): min=5317, max=30200, avg=15327.13, stdev=3634.42 00:09:03.307 lat (usec): min=5329, max=34684, avg=15443.34, stdev=3697.49 00:09:03.307 clat percentiles (usec): 00:09:03.307 | 1.00th=[ 6783], 5.00th=[ 9241], 10.00th=[11600], 20.00th=[12780], 00:09:03.307 | 30.00th=[13304], 40.00th=[14484], 50.00th=[15139], 60.00th=[16057], 00:09:03.307 | 70.00th=[16909], 80.00th=[17695], 
90.00th=[19530], 95.00th=[21103], 00:09:03.307 | 99.00th=[25035], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:09:03.307 | 99.99th=[30278] 00:09:03.307 write: IOPS=3780, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1011msec); 0 zone resets 00:09:03.307 slat (usec): min=2, max=26086, avg=140.24, stdev=904.82 00:09:03.307 clat (usec): min=2230, max=68026, avg=19071.24, stdev=11208.75 00:09:03.307 lat (usec): min=2238, max=68031, avg=19211.49, stdev=11295.86 00:09:03.307 clat percentiles (usec): 00:09:03.307 | 1.00th=[ 5997], 5.00th=[ 7570], 10.00th=[ 8848], 20.00th=[11994], 00:09:03.307 | 30.00th=[13042], 40.00th=[13304], 50.00th=[15270], 60.00th=[18744], 00:09:03.307 | 70.00th=[21365], 80.00th=[25560], 90.00th=[31065], 95.00th=[42206], 00:09:03.307 | 99.00th=[65274], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:09:03.307 | 99.99th=[67634] 00:09:03.307 bw ( KiB/s): min=12288, max=17272, per=22.56%, avg=14780.00, stdev=3524.22, samples=2 00:09:03.307 iops : min= 3072, max= 4318, avg=3695.00, stdev=881.06, samples=2 00:09:03.307 lat (msec) : 4=0.05%, 10=10.14%, 20=68.32%, 50=19.78%, 100=1.70% 00:09:03.307 cpu : usr=2.38%, sys=5.84%, ctx=350, majf=0, minf=1 00:09:03.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:03.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.307 issued rwts: total=3584,3822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.307 00:09:03.307 Run status group 0 (all jobs): 00:09:03.307 READ: bw=60.0MiB/s (62.9MB/s), 11.9MiB/s-19.9MiB/s (12.4MB/s-20.8MB/s), io=60.6MiB (63.6MB), run=1006-1011msec 00:09:03.307 WRITE: bw=64.0MiB/s (67.1MB/s), 12.5MiB/s-21.0MiB/s (13.1MB/s-22.0MB/s), io=64.7MiB (67.8MB), run=1006-1011msec 00:09:03.307 00:09:03.307 Disk stats (read/write): 00:09:03.307 nvme0n1: ios=2579/2735, merge=0/0, 
ticks=35740/66472, in_queue=102212, util=96.09% 00:09:03.307 nvme0n2: ios=3121/3479, merge=0/0, ticks=23150/22045, in_queue=45195, util=97.87% 00:09:03.307 nvme0n3: ios=4666/4631, merge=0/0, ticks=42572/33279, in_queue=75851, util=98.34% 00:09:03.307 nvme0n4: ios=2938/3072, merge=0/0, ticks=26284/32960, in_queue=59244, util=88.98% 00:09:03.307 03:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:03.307 03:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2504533 00:09:03.307 03:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:03.307 03:17:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:03.307 [global] 00:09:03.307 thread=1 00:09:03.308 invalidate=1 00:09:03.308 rw=read 00:09:03.308 time_based=1 00:09:03.308 runtime=10 00:09:03.308 ioengine=libaio 00:09:03.308 direct=1 00:09:03.308 bs=4096 00:09:03.308 iodepth=1 00:09:03.308 norandommap=1 00:09:03.308 numjobs=1 00:09:03.308 00:09:03.308 [job0] 00:09:03.308 filename=/dev/nvme0n1 00:09:03.308 [job1] 00:09:03.308 filename=/dev/nvme0n2 00:09:03.308 [job2] 00:09:03.308 filename=/dev/nvme0n3 00:09:03.308 [job3] 00:09:03.308 filename=/dev/nvme0n4 00:09:03.308 Could not set queue depth (nvme0n1) 00:09:03.308 Could not set queue depth (nvme0n2) 00:09:03.308 Could not set queue depth (nvme0n3) 00:09:03.308 Could not set queue depth (nvme0n4) 00:09:03.574 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.574 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.574 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.574 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:09:03.574 fio-3.35 00:09:03.574 Starting 4 threads 00:09:06.094 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:06.351 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:06.351 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:09:06.351 fio: pid=2504679, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.608 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4706304, buflen=4096 00:09:06.608 fio: pid=2504678, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.608 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.608 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:06.866 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46149632, buflen=4096 00:09:06.866 fio: pid=2504676, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.866 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.866 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:06.866 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57470976, buflen=4096 00:09:06.866 fio: pid=2504677, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.866 03:17:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.866 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:07.124 00:09:07.124 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2504676: Fri Dec 6 03:17:27 2024 00:09:07.124 read: IOPS=3609, BW=14.1MiB/s (14.8MB/s)(44.0MiB/3122msec) 00:09:07.124 slat (usec): min=6, max=30966, avg=13.03, stdev=355.75 00:09:07.124 clat (usec): min=185, max=841, avg=261.13, stdev=40.07 00:09:07.124 lat (usec): min=192, max=31413, avg=274.16, stdev=360.98 00:09:07.124 clat percentiles (usec): 00:09:07.124 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 229], 00:09:07.124 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 265], 60.00th=[ 273], 00:09:07.124 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:09:07.124 | 99.00th=[ 433], 99.50th=[ 441], 99.90th=[ 461], 99.95th=[ 502], 00:09:07.124 | 99.99th=[ 652] 00:09:07.124 bw ( KiB/s): min=13168, max=16960, per=45.66%, avg=14530.00, stdev=1357.43, samples=6 00:09:07.124 iops : min= 3292, max= 4240, avg=3632.50, stdev=339.36, samples=6 00:09:07.124 lat (usec) : 250=43.34%, 500=56.60%, 750=0.04%, 1000=0.01% 00:09:07.124 cpu : usr=0.83%, sys=3.27%, ctx=11272, majf=0, minf=1 00:09:07.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.124 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.124 issued rwts: total=11268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.124 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2504677: Fri 
Dec 6 03:17:27 2024 00:09:07.124 read: IOPS=4210, BW=16.4MiB/s (17.2MB/s)(54.8MiB/3333msec) 00:09:07.124 slat (usec): min=6, max=31164, avg=13.44, stdev=335.26 00:09:07.124 clat (usec): min=170, max=21847, avg=220.49, stdev=183.51 00:09:07.124 lat (usec): min=178, max=31525, avg=233.94, stdev=383.79 00:09:07.124 clat percentiles (usec): 00:09:07.124 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 206], 00:09:07.124 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:09:07.124 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 239], 95.00th=[ 245], 00:09:07.124 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 330], 00:09:07.124 | 99.99th=[ 1172] 00:09:07.124 bw ( KiB/s): min=15720, max=17576, per=53.72%, avg=17093.33, stdev=700.11, samples=6 00:09:07.124 iops : min= 3930, max= 4394, avg=4273.33, stdev=175.03, samples=6 00:09:07.124 lat (usec) : 250=97.36%, 500=2.62% 00:09:07.124 lat (msec) : 2=0.01%, 50=0.01% 00:09:07.124 cpu : usr=2.55%, sys=6.60%, ctx=14036, majf=0, minf=2 00:09:07.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.124 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.124 issued rwts: total=14032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.124 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2504678: Fri Dec 6 03:17:27 2024 00:09:07.124 read: IOPS=393, BW=1572KiB/s (1610kB/s)(4596KiB/2924msec) 00:09:07.124 slat (nsec): min=8030, max=30310, avg=9664.10, stdev=3429.91 00:09:07.124 clat (usec): min=218, max=41973, avg=2514.39, stdev=9288.24 00:09:07.124 lat (usec): min=228, max=41997, avg=2524.05, stdev=9291.15 00:09:07.124 clat percentiles (usec): 00:09:07.124 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:09:07.124 | 
30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:09:07.124 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[41157], 00:09:07.124 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:07.124 | 99.99th=[42206] 00:09:07.124 bw ( KiB/s): min= 96, max= 8088, per=5.73%, avg=1822.40, stdev=3508.64, samples=5 00:09:07.124 iops : min= 24, max= 2022, avg=455.60, stdev=877.16, samples=5 00:09:07.124 lat (usec) : 250=2.70%, 500=91.65%, 750=0.09% 00:09:07.125 lat (msec) : 50=5.48% 00:09:07.125 cpu : usr=0.27%, sys=0.27%, ctx=1153, majf=0, minf=2 00:09:07.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.125 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.125 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.125 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2504679: Fri Dec 6 03:17:27 2024 00:09:07.125 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2730msec) 00:09:07.125 slat (nsec): min=11437, max=34671, avg=23069.90, stdev=2351.27 00:09:07.125 clat (usec): min=405, max=43034, avg=40391.80, stdev=4965.91 00:09:07.125 lat (usec): min=439, max=43063, avg=40414.88, stdev=4964.52 00:09:07.125 clat percentiles (usec): 00:09:07.125 | 1.00th=[ 404], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:07.125 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:07.125 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:07.125 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:07.125 | 99.99th=[43254] 00:09:07.125 bw ( KiB/s): min= 96, max= 104, per=0.31%, avg=99.20, stdev= 4.38, samples=5 00:09:07.125 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 
00:09:07.125 lat (usec) : 500=1.47% 00:09:07.125 lat (msec) : 50=97.06% 00:09:07.125 cpu : usr=0.00%, sys=0.11%, ctx=68, majf=0, minf=2 00:09:07.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.125 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.125 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.125 00:09:07.125 Run status group 0 (all jobs): 00:09:07.125 READ: bw=31.1MiB/s (32.6MB/s), 98.2KiB/s-16.4MiB/s (101kB/s-17.2MB/s), io=104MiB (109MB), run=2730-3333msec 00:09:07.125 00:09:07.125 Disk stats (read/write): 00:09:07.125 nvme0n1: ios=11267/0, merge=0/0, ticks=2897/0, in_queue=2897, util=93.77% 00:09:07.125 nvme0n2: ios=13217/0, merge=0/0, ticks=2788/0, in_queue=2788, util=95.02% 00:09:07.125 nvme0n3: ios=1184/0, merge=0/0, ticks=3766/0, in_queue=3766, util=99.09% 00:09:07.125 nvme0n4: ios=64/0, merge=0/0, ticks=2585/0, in_queue=2585, util=96.45% 00:09:07.125 03:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.125 03:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:07.382 03:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.382 03:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:07.640 03:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.640 03:17:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:07.898 03:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.898 03:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:07.898 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:07.898 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2504533 00:09:07.898 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:07.898 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:08.164 nvmf hotplug test: fio failed as expected 00:09:08.164 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.421 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.421 rmmod nvme_tcp 00:09:08.421 rmmod nvme_fabrics 00:09:08.422 rmmod nvme_keyring 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2501633 ']' 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2501633 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2501633 ']' 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2501633 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2501633 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2501633' 00:09:08.422 killing process with pid 2501633 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2501633 00:09:08.422 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2501633 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:08.679 03:17:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.679 03:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.212 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:11.212 00:09:11.212 real 0m26.446s 00:09:11.212 user 1m46.568s 00:09:11.212 sys 0m8.474s 00:09:11.212 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:11.213 ************************************ 00:09:11.213 END TEST nvmf_fio_target 00:09:11.213 ************************************ 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:09:11.213 ************************************ 00:09:11.213 START TEST nvmf_bdevio 00:09:11.213 ************************************ 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:11.213 * Looking for test storage... 00:09:11.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.213 03:17:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.213 03:17:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:11.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.213 --rc genhtml_branch_coverage=1 00:09:11.213 --rc genhtml_function_coverage=1 00:09:11.213 --rc genhtml_legend=1 00:09:11.213 --rc geninfo_all_blocks=1 00:09:11.213 --rc geninfo_unexecuted_blocks=1 00:09:11.213 00:09:11.213 ' 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:11.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.213 --rc genhtml_branch_coverage=1 00:09:11.213 --rc genhtml_function_coverage=1 00:09:11.213 --rc genhtml_legend=1 00:09:11.213 --rc geninfo_all_blocks=1 00:09:11.213 --rc geninfo_unexecuted_blocks=1 00:09:11.213 00:09:11.213 ' 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:11.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.213 --rc genhtml_branch_coverage=1 00:09:11.213 --rc genhtml_function_coverage=1 00:09:11.213 --rc genhtml_legend=1 00:09:11.213 --rc geninfo_all_blocks=1 00:09:11.213 --rc geninfo_unexecuted_blocks=1 00:09:11.213 00:09:11.213 ' 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:11.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.213 --rc genhtml_branch_coverage=1 00:09:11.213 --rc genhtml_function_coverage=1 00:09:11.213 --rc genhtml_legend=1 00:09:11.213 --rc geninfo_all_blocks=1 00:09:11.213 --rc geninfo_unexecuted_blocks=1 00:09:11.213 00:09:11.213 ' 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.213 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.214 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.481 03:17:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.481 03:17:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:16.481 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:16.481 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.481 
03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:16.481 Found net devices under 0000:86:00.0: cvl_0_0 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.481 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:16.482 Found net devices under 0000:86:00.1: cvl_0_1 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.482 03:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:16.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:09:16.482 00:09:16.482 --- 10.0.0.2 ping statistics --- 00:09:16.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.482 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:09:16.482 00:09:16.482 --- 10.0.0.1 ping statistics --- 00:09:16.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.482 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.482 03:17:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2508923 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2508923 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2508923 ']' 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.482 [2024-12-06 03:17:36.240634] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:09:16.482 [2024-12-06 03:17:36.240677] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.482 [2024-12-06 03:17:36.308024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.482 [2024-12-06 03:17:36.347596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.482 [2024-12-06 03:17:36.347634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.482 [2024-12-06 03:17:36.347641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.482 [2024-12-06 03:17:36.347648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.482 [2024-12-06 03:17:36.347653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:16.482 [2024-12-06 03:17:36.349162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:16.482 [2024-12-06 03:17:36.349272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:16.482 [2024-12-06 03:17:36.349357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.482 [2024-12-06 03:17:36.349358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.482 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.483 [2024-12-06 03:17:36.498913] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.483 03:17:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.483 Malloc0 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.483 [2024-12-06 03:17:36.561590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:16.483 { 00:09:16.483 "params": { 00:09:16.483 "name": "Nvme$subsystem", 00:09:16.483 "trtype": "$TEST_TRANSPORT", 00:09:16.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.483 "adrfam": "ipv4", 00:09:16.483 "trsvcid": "$NVMF_PORT", 00:09:16.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.483 "hdgst": ${hdgst:-false}, 00:09:16.483 "ddgst": ${ddgst:-false} 00:09:16.483 }, 00:09:16.483 "method": "bdev_nvme_attach_controller" 00:09:16.483 } 00:09:16.483 EOF 00:09:16.483 )") 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:16.483 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:16.483 "params": { 00:09:16.483 "name": "Nvme1", 00:09:16.483 "trtype": "tcp", 00:09:16.483 "traddr": "10.0.0.2", 00:09:16.483 "adrfam": "ipv4", 00:09:16.483 "trsvcid": "4420", 00:09:16.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.483 "hdgst": false, 00:09:16.483 "ddgst": false 00:09:16.483 }, 00:09:16.483 "method": "bdev_nvme_attach_controller" 00:09:16.483 }' 00:09:16.483 [2024-12-06 03:17:36.610013] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:09:16.483 [2024-12-06 03:17:36.610058] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2508995 ] 00:09:16.740 [2024-12-06 03:17:36.674957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:16.740 [2024-12-06 03:17:36.719416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.740 [2024-12-06 03:17:36.719511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.740 [2024-12-06 03:17:36.719514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.997 I/O targets: 00:09:16.997 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:16.997 00:09:16.997 00:09:16.997 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.997 http://cunit.sourceforge.net/ 00:09:16.997 00:09:16.997 00:09:16.997 Suite: bdevio tests on: Nvme1n1 00:09:16.997 Test: blockdev write read block ...passed 00:09:16.997 Test: blockdev write zeroes read block ...passed 00:09:16.997 Test: blockdev write zeroes read no split ...passed 00:09:16.997 Test: blockdev write zeroes read split 
...passed 00:09:16.997 Test: blockdev write zeroes read split partial ...passed 00:09:16.997 Test: blockdev reset ...[2024-12-06 03:17:37.117430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:16.997 [2024-12-06 03:17:37.117494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bcf30 (9): Bad file descriptor 00:09:17.254 [2024-12-06 03:17:37.173582] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:17.254 passed 00:09:17.254 Test: blockdev write read 8 blocks ...passed 00:09:17.254 Test: blockdev write read size > 128k ...passed 00:09:17.254 Test: blockdev write read invalid size ...passed 00:09:17.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:17.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:17.254 Test: blockdev write read max offset ...passed 00:09:17.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:17.254 Test: blockdev writev readv 8 blocks ...passed 00:09:17.254 Test: blockdev writev readv 30 x 1block ...passed 00:09:17.512 Test: blockdev writev readv block ...passed 00:09:17.512 Test: blockdev writev readv size > 128k ...passed 00:09:17.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:17.512 Test: blockdev comparev and writev ...[2024-12-06 03:17:37.466906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.512 [2024-12-06 03:17:37.466936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.466954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.512 [2024-12-06 
03:17:37.466963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.467231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.512 [2024-12-06 03:17:37.467247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.467258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.512 [2024-12-06 03:17:37.467266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.467506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.512 [2024-12-06 03:17:37.467518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.467531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.512 [2024-12-06 03:17:37.467539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.467779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.512 [2024-12-06 03:17:37.467791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.467804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.512 [2024-12-06 03:17:37.467812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:17.512 passed 00:09:17.512 Test: blockdev nvme passthru rw ...passed 00:09:17.512 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:17:37.551317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:17.512 [2024-12-06 03:17:37.551335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.551448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:17.512 [2024-12-06 03:17:37.551459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.551580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:17.512 [2024-12-06 03:17:37.551590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:17.512 [2024-12-06 03:17:37.551700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:17.512 [2024-12-06 03:17:37.551711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:17.512 passed 00:09:17.512 Test: blockdev nvme admin passthru ...passed 00:09:17.512 Test: blockdev copy ...passed 00:09:17.512 00:09:17.512 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.512 suites 1 1 n/a 0 0 00:09:17.512 tests 23 23 23 0 0 00:09:17.512 asserts 152 152 152 0 n/a 00:09:17.512 00:09:17.512 Elapsed time = 1.294 seconds 
00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.814 rmmod nvme_tcp 00:09:17.814 rmmod nvme_fabrics 00:09:17.814 rmmod nvme_keyring 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2508923 ']' 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2508923 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2508923 ']' 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2508923 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2508923 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2508923' 00:09:17.814 killing process with pid 2508923 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2508923 00:09:17.814 03:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2508923 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.072 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.601 00:09:20.601 real 0m9.320s 00:09:20.601 user 0m10.508s 00:09:20.601 sys 0m4.492s 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.601 ************************************ 00:09:20.601 END TEST nvmf_bdevio 00:09:20.601 ************************************ 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:20.601 00:09:20.601 real 4m26.622s 00:09:20.601 user 10m12.324s 00:09:20.601 sys 1m31.923s 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.601 ************************************ 00:09:20.601 END TEST nvmf_target_core 00:09:20.601 ************************************ 00:09:20.601 03:17:40 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:20.601 03:17:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:20.601 03:17:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.601 03:17:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:20.601 ************************************ 00:09:20.601 START TEST nvmf_target_extra 00:09:20.601 ************************************ 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:20.601 * Looking for test storage... 00:09:20.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.601 --rc genhtml_legend=1 00:09:20.601 --rc geninfo_all_blocks=1 
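The xtrace lines above walk through `cmp_versions` from scripts/common.sh comparing `1.15 '<' 2` field by field. A simplified sketch of that logic (splitting only on `.`, whereas the real script's `IFS=.-:` also splits on `-` and `:`; `lt` here mirrors the wrapper name seen in the trace):

```shell
#!/usr/bin/env bash
# Simplified sketch of the version comparison traced above: split each
# version on '.', then compare fields numerically left to right, treating
# missing fields as 0. Not the verbatim scripts/common.sh implementation.
lt() { # usage: lt VER1 VER2 -> exit 0 iff VER1 < VER2
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local len=${#ver1[@]} v
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # numeric, not lexicographic: 1.9 < 1.15 holds because 9 < 15
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1 # versions are equal, so not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace above sets `ver1[v]=1` against `ver2[v]=2` and returns 0: the first differing field decides the comparison.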
00:09:20.601 --rc geninfo_unexecuted_blocks=1 00:09:20.601 00:09:20.601 ' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.601 --rc genhtml_legend=1 00:09:20.601 --rc geninfo_all_blocks=1 00:09:20.601 --rc geninfo_unexecuted_blocks=1 00:09:20.601 00:09:20.601 ' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.601 --rc genhtml_legend=1 00:09:20.601 --rc geninfo_all_blocks=1 00:09:20.601 --rc geninfo_unexecuted_blocks=1 00:09:20.601 00:09:20.601 ' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.601 --rc genhtml_legend=1 00:09:20.601 --rc geninfo_all_blocks=1 00:09:20.601 --rc geninfo_unexecuted_blocks=1 00:09:20.601 00:09:20.601 ' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:20.601 ************************************ 00:09:20.601 START TEST nvmf_example 00:09:20.601 ************************************ 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:20.601 * Looking for test storage... 00:09:20.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.601 
03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.601 --rc genhtml_legend=1 00:09:20.601 --rc geninfo_all_blocks=1 00:09:20.601 --rc geninfo_unexecuted_blocks=1 00:09:20.601 00:09:20.601 ' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.601 --rc genhtml_legend=1 00:09:20.601 --rc geninfo_all_blocks=1 00:09:20.601 --rc geninfo_unexecuted_blocks=1 00:09:20.601 00:09:20.601 ' 00:09:20.601 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.602 --rc genhtml_legend=1 00:09:20.602 --rc geninfo_all_blocks=1 00:09:20.602 --rc geninfo_unexecuted_blocks=1 00:09:20.602 00:09:20.602 ' 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.602 --rc 
genhtml_branch_coverage=1 00:09:20.602 --rc genhtml_function_coverage=1 00:09:20.602 --rc genhtml_legend=1 00:09:20.602 --rc geninfo_all_blocks=1 00:09:20.602 --rc geninfo_unexecuted_blocks=1 00:09:20.602 00:09:20.602 ' 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:20.602 03:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.602 
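The `[: : integer expression expected` messages emitted above come from common.sh line 33 evaluating `'[' '' -eq 1 ']'`: `-eq` requires integer operands, and the variable under test expands to an empty string. A hedged sketch of the usual fix (`FLAG` is a stand-in name, not the actual variable in common.sh): default empty/unset to 0 so the test stays numeric and simply evaluates false.

```shell
#!/usr/bin/env bash
# FLAG is a hypothetical stand-in for the empty variable tested at
# common.sh:33; the real variable name is not visible in this log.
FLAG=""

# '[' "$FLAG" -eq 1 ']' would print "integer expression expected";
# the ${FLAG:-0} form substitutes 0 when FLAG is unset or empty.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

The test still takes the false branch for an empty value, so behavior is unchanged apart from silencing the error.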
03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.602 03:17:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.862 03:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.862 03:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.862 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:25.862 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:26.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:26.119 Found net devices under 0000:86:00.0: cvl_0_0 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.119 03:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:26.119 Found net devices under 0000:86:00.1: cvl_0_1 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.119 
03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.119 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:09:26.120 00:09:26.120 --- 10.0.0.2 ping statistics --- 00:09:26.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.120 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:09:26.120 00:09:26.120 --- 10.0.0.1 ping statistics --- 00:09:26.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.120 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.120 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.120 03:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2512770 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2512770 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2512770 ']' 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:26.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.376 03:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:27.305 
03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:27.305 03:17:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:39.499 Initializing NVMe Controllers 00:09:39.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:39.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:39.499 Initialization complete. Launching workers. 00:09:39.499 ======================================================== 00:09:39.499 Latency(us) 00:09:39.499 Device Information : IOPS MiB/s Average min max 00:09:39.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18042.37 70.48 3546.77 711.17 20057.05 00:09:39.499 ======================================================== 00:09:39.499 Total : 18042.37 70.48 3546.77 711.17 20057.05 00:09:39.499 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.499 rmmod nvme_tcp 00:09:39.499 rmmod nvme_fabrics 00:09:39.499 rmmod nvme_keyring 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2512770 ']' 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2512770 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2512770 ']' 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2512770 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2512770 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2512770' 00:09:39.499 killing process with pid 2512770 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2512770 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2512770 00:09:39.499 nvmf threads initialize successfully 00:09:39.499 bdev subsystem init successfully 00:09:39.499 created a nvmf target service 00:09:39.499 create targets's poll groups done 00:09:39.499 all subsystems of target started 00:09:39.499 nvmf target is running 00:09:39.499 all subsystems of target stopped 00:09:39.499 destroy targets's poll groups done 00:09:39.499 destroyed the nvmf target service 00:09:39.499 bdev subsystem 
finish successfully 00:09:39.499 nvmf threads destroy successfully 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.499 03:17:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.067 03:17:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.067 03:17:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:40.067 03:17:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.067 03:17:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.067 00:09:40.067 real 0m19.525s 00:09:40.067 user 0m46.368s 00:09:40.067 sys 0m5.847s 00:09:40.067 
03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.067 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.067 ************************************ 00:09:40.067 END TEST nvmf_example 00:09:40.067 ************************************ 00:09:40.067 03:18:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:40.067 03:18:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.067 03:18:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.067 03:18:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:40.067 ************************************ 00:09:40.067 START TEST nvmf_filesystem 00:09:40.067 ************************************ 00:09:40.067 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:40.067 * Looking for test storage... 
00:09:40.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.067 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.067 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.067 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:40.328 
03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.328 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:40.328 --rc genhtml_branch_coverage=1 00:09:40.328 --rc genhtml_function_coverage=1 00:09:40.328 --rc genhtml_legend=1 00:09:40.328 --rc geninfo_all_blocks=1 00:09:40.328 --rc geninfo_unexecuted_blocks=1 00:09:40.328 00:09:40.328 ' 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.328 --rc genhtml_branch_coverage=1 00:09:40.328 --rc genhtml_function_coverage=1 00:09:40.328 --rc genhtml_legend=1 00:09:40.328 --rc geninfo_all_blocks=1 00:09:40.328 --rc geninfo_unexecuted_blocks=1 00:09:40.328 00:09:40.328 ' 00:09:40.328 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.328 --rc genhtml_branch_coverage=1 00:09:40.328 --rc genhtml_function_coverage=1 00:09:40.328 --rc genhtml_legend=1 00:09:40.328 --rc geninfo_all_blocks=1 00:09:40.329 --rc geninfo_unexecuted_blocks=1 00:09:40.329 00:09:40.329 ' 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.329 --rc genhtml_branch_coverage=1 00:09:40.329 --rc genhtml_function_coverage=1 00:09:40.329 --rc genhtml_legend=1 00:09:40.329 --rc geninfo_all_blocks=1 00:09:40.329 --rc geninfo_unexecuted_blocks=1 00:09:40.329 00:09:40.329 ' 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:40.329 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:40.329 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:40.329 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:40.329 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:40.329 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:40.330 03:18:00 
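The long `CONFIG_*` run traced above comes from sourcing `build_config.sh`, a generated file of plain `KEY=value` lines that test scripts then branch on. A self-contained sketch of the same pattern, using a temp file and illustrative flag values rather than the real generated config:

```shell
#!/usr/bin/env bash
# Sketch: a generated file of CONFIG_*=y/n lines, sourced and branched on.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
CONFIG_DEBUG=y
CONFIG_UBSAN=y
CONFIG_ASAN=n
EOF

# Sourcing turns every line into a shell variable in this process.
source "$cfg"

if [[ $CONFIG_DEBUG == y ]]; then
    build_type=debug
else
    build_type=release
fi
echo "build type: $build_type"
rm -f "$cfg"
```

Because the file is plain shell assignments, no parser is needed — `source` is the whole loading step.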
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:40.330 
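The `applications.sh` trace above first canonicalizes its own location with `dirname`/`readlink -f`, walks up to the repository root, and then defines each app launcher as a bash array. A sketch of that resolution, with the repo layout faked under `mktemp` so the example is self-contained (the real layout is `<root>/test/common/applications.sh` with binaries in `<root>/build/bin`):

```shell
#!/usr/bin/env bash
# Fake a repo root so the path resolution below has something to resolve.
_root=$(mktemp -d)
mkdir -p "$_root/test/common" "$_root/build/bin"
_script=$_root/test/common/applications.sh

_dir=$(readlink -f "$(dirname "$_script")")   # .../test/common
_root_resolved=$(readlink -f "$_dir/../..")   # repository root
_app_dir=$_root_resolved/build/bin

# Apps are arrays, so callers can splice in flags: "${NVMF_APP[@]}" -m 0x2
NVMF_APP=("$_app_dir/nvmf_tgt")
SPDK_APP=("$_app_dir/spdk_tgt")
echo "${NVMF_APP[0]}"
rm -rf "$_root"
```

Defining launchers as arrays rather than strings keeps later `"${NVMF_APP[@]}" --flag` invocations safe under word splitting.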
03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:40.330 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:40.330 #define SPDK_CONFIG_H 00:09:40.330 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:40.330 #define SPDK_CONFIG_APPS 1 00:09:40.330 #define SPDK_CONFIG_ARCH native 00:09:40.330 #undef SPDK_CONFIG_ASAN 00:09:40.330 #undef SPDK_CONFIG_AVAHI 00:09:40.330 #undef SPDK_CONFIG_CET 00:09:40.330 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:40.330 #define SPDK_CONFIG_COVERAGE 1 00:09:40.330 #define SPDK_CONFIG_CROSS_PREFIX 00:09:40.330 #undef SPDK_CONFIG_CRYPTO 00:09:40.330 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:40.330 #undef SPDK_CONFIG_CUSTOMOCF 00:09:40.330 #undef SPDK_CONFIG_DAOS 00:09:40.330 #define SPDK_CONFIG_DAOS_DIR 00:09:40.330 #define SPDK_CONFIG_DEBUG 1 00:09:40.330 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:40.330 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:40.330 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:40.330 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:40.330 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:40.330 #undef SPDK_CONFIG_DPDK_UADK 00:09:40.330 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:40.330 #define SPDK_CONFIG_EXAMPLES 1 00:09:40.330 #undef SPDK_CONFIG_FC 00:09:40.330 #define SPDK_CONFIG_FC_PATH 00:09:40.330 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:40.330 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:40.330 #define SPDK_CONFIG_FSDEV 1 00:09:40.330 #undef SPDK_CONFIG_FUSE 00:09:40.330 #undef SPDK_CONFIG_FUZZER 00:09:40.330 #define SPDK_CONFIG_FUZZER_LIB 00:09:40.330 #undef SPDK_CONFIG_GOLANG 00:09:40.330 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:40.330 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:40.330 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:40.330 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:40.330 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:40.330 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:40.330 #undef SPDK_CONFIG_HAVE_LZ4 00:09:40.330 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:40.330 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:40.330 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:40.330 #define SPDK_CONFIG_IDXD 1 00:09:40.330 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:40.330 #undef SPDK_CONFIG_IPSEC_MB 00:09:40.330 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:40.330 #define SPDK_CONFIG_ISAL 1 00:09:40.330 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:40.330 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:40.330 #define SPDK_CONFIG_LIBDIR 00:09:40.330 #undef SPDK_CONFIG_LTO 00:09:40.330 #define SPDK_CONFIG_MAX_LCORES 128 00:09:40.330 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:40.330 #define SPDK_CONFIG_NVME_CUSE 1 00:09:40.330 #undef SPDK_CONFIG_OCF 00:09:40.330 #define SPDK_CONFIG_OCF_PATH 00:09:40.330 #define SPDK_CONFIG_OPENSSL_PATH 00:09:40.330 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:40.330 #define SPDK_CONFIG_PGO_DIR 00:09:40.330 #undef SPDK_CONFIG_PGO_USE 00:09:40.330 #define SPDK_CONFIG_PREFIX /usr/local 00:09:40.330 #undef SPDK_CONFIG_RAID5F 00:09:40.330 #undef SPDK_CONFIG_RBD 00:09:40.330 #define SPDK_CONFIG_RDMA 1 00:09:40.330 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:40.330 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:40.330 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:40.330 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:40.330 #define SPDK_CONFIG_SHARED 1 00:09:40.330 #undef SPDK_CONFIG_SMA 00:09:40.330 #define SPDK_CONFIG_TESTS 1 00:09:40.330 #undef SPDK_CONFIG_TSAN 00:09:40.330 #define SPDK_CONFIG_UBLK 1 00:09:40.330 #define SPDK_CONFIG_UBSAN 1 00:09:40.330 #undef SPDK_CONFIG_UNIT_TESTS 00:09:40.330 #undef SPDK_CONFIG_URING 00:09:40.330 #define SPDK_CONFIG_URING_PATH 00:09:40.330 #undef SPDK_CONFIG_URING_ZNS 00:09:40.330 #undef SPDK_CONFIG_USDT 00:09:40.331 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:40.331 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:40.331 #define SPDK_CONFIG_VFIO_USER 1 00:09:40.331 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:40.331 #define SPDK_CONFIG_VHOST 1 00:09:40.331 #define SPDK_CONFIG_VIRTIO 1 00:09:40.331 #undef SPDK_CONFIG_VTUNE 00:09:40.331 #define SPDK_CONFIG_VTUNE_DIR 00:09:40.331 #define SPDK_CONFIG_WERROR 1 00:09:40.331 #define SPDK_CONFIG_WPDK_DIR 00:09:40.331 #undef SPDK_CONFIG_XNVME 00:09:40.331 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
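The heavily escaped pattern at the end of that trace entry is a bash `[[ ... == *pattern* ]]` substring test: applications.sh reads the generated `config.h` and glob-matches it for `#define SPDK_CONFIG_DEBUG`. A standalone sketch of the same check against an illustrative header:

```shell
#!/usr/bin/env bash
# Sketch of the config.h debug check from the trace, on a throwaway header.
hdr=$(mktemp)
cat > "$hdr" <<'EOF'
#ifndef SPDK_CONFIG_H
#define SPDK_CONFIG_H
#define SPDK_CONFIG_DEBUG 1
#undef SPDK_CONFIG_ASAN
#endif /* SPDK_CONFIG_H */
EOF

# $(<file) reads the whole file without running cat; the surrounding *...*
# wildcards turn [[ == ]] into a substring test, so no grep is needed.
if [[ $(<"$hdr") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    debug_build=1
else
    debug_build=0
fi
echo "debug_build=$debug_build"
rm -f "$hdr"
```

The backslash-escaped form in the xtrace output is just how bash prints the quoted pattern; the unescaped test above is equivalent.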
00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
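The PATH values traced above accumulate repeated `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries because `paths/export.sh` prepends unconditionally each time it is sourced. Duplicates are harmless to lookup, but a guard keeps the prepend idempotent; a sketch of one (hypothetical helper name, demo variable used instead of the live `PATH`):

```shell
#!/usr/bin/env bash
# Idempotent prepend: only add the directory if it is not already present.
path_prepend() {
    case ":$demo_path:" in
        *":$1:"*) ;;                      # already in the list: no-op
        *) demo_path="$1:$demo_path" ;;   # otherwise prepend
    esac
}

demo_path=/usr/local/bin:/usr/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call changes nothing
echo "$demo_path"
```

Wrapping the value in `:`...`:` makes the `case` match exact path components, so `/opt/go` would not be mistaken for `/opt/go/1.21.1/bin`.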
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
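The `pm/common` trace above builds an associative array marking which resource collectors need sudo, then inspects the host (OS, QEMU marker, `/.dockerenv`) to decide which monitors to enable. A sketch of that selection logic, with the names mirrored from the log and the environment checks simplified:

```shell
#!/usr/bin/env bash
# Map each collector to whether it must run under sudo (1) or not (0).
declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
SUDO=("" "sudo -E")   # indexed by the map: 0 -> plain, 1 -> elevated

# Software monitors always run; hardware monitors only on bare-metal Linux.
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
fi

for m in "${MONITOR_RESOURCES[@]}"; do
    printf '%s%s\n' "${SUDO[${MONITOR_RESOURCES_SUDO[$m]:-0}]:+${SUDO[1]} }" "$m"
done
```

Keeping the sudo decision in data (the array) rather than in per-monitor `if` blocks is what lets the launch loop stay a one-liner.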
-e /.dockerenv ]] 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:40.331 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:40.331 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:40.332 
03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:40.332 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:40.332 
03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:40.332 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:40.332 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:40.333 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2515190 ]] 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2515190 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ZgL2jN 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ZgL2jN/tests/target /tmp/spdk.ZgL2jN 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:40.334 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189092835328 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6871126016 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97980420096 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:40.335 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1560576 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:40.335 * Looking for test storage... 
00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189092835328 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9085718528 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.335 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:40.335 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:40.335 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.336 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.336 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.594 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.595 --rc genhtml_branch_coverage=1 00:09:40.595 --rc genhtml_function_coverage=1 00:09:40.595 --rc genhtml_legend=1 00:09:40.595 --rc geninfo_all_blocks=1 00:09:40.595 --rc geninfo_unexecuted_blocks=1 00:09:40.595 00:09:40.595 ' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.595 --rc genhtml_branch_coverage=1 00:09:40.595 --rc genhtml_function_coverage=1 00:09:40.595 --rc genhtml_legend=1 00:09:40.595 --rc geninfo_all_blocks=1 00:09:40.595 --rc geninfo_unexecuted_blocks=1 00:09:40.595 00:09:40.595 ' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.595 --rc genhtml_branch_coverage=1 00:09:40.595 --rc genhtml_function_coverage=1 00:09:40.595 --rc genhtml_legend=1 00:09:40.595 --rc geninfo_all_blocks=1 00:09:40.595 --rc geninfo_unexecuted_blocks=1 00:09:40.595 00:09:40.595 ' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.595 --rc genhtml_branch_coverage=1 00:09:40.595 --rc genhtml_function_coverage=1 00:09:40.595 --rc genhtml_legend=1 00:09:40.595 --rc geninfo_all_blocks=1 00:09:40.595 --rc geninfo_unexecuted_blocks=1 00:09:40.595 00:09:40.595 ' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.595 03:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.595 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.596 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.596 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.596 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.596 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:40.596 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:40.596 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:40.596 03:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.907 03:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:45.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:45.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.907 03:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:45.907 Found net devices under 0000:86:00.0: cvl_0_0 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.907 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:45.908 Found net devices under 0000:86:00.1: cvl_0_1 00:09:45.908 03:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.908 03:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:09:45.908 00:09:45.908 --- 10.0.0.2 ping statistics --- 00:09:45.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.908 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:09:45.908 00:09:45.908 --- 10.0.0.1 ping statistics --- 00:09:45.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.908 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.908 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:46.166 03:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:46.166 ************************************ 00:09:46.166 START TEST nvmf_filesystem_no_in_capsule 00:09:46.166 ************************************ 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2518557 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2518557 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2518557 ']' 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.166 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.166 [2024-12-06 03:18:06.150658] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:09:46.166 [2024-12-06 03:18:06.150702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.166 [2024-12-06 03:18:06.215436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.166 [2024-12-06 03:18:06.256108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.166 [2024-12-06 03:18:06.256147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:46.166 [2024-12-06 03:18:06.256155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.166 [2024-12-06 03:18:06.256161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.166 [2024-12-06 03:18:06.256167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.166 [2024-12-06 03:18:06.257696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.166 [2024-12-06 03:18:06.257714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.166 [2024-12-06 03:18:06.257806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.166 [2024-12-06 03:18:06.257808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 [2024-12-06 03:18:06.408289] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 Malloc1 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.423 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.680 [2024-12-06 03:18:06.565191] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:46.680 03:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:46.680 { 00:09:46.680 "name": "Malloc1", 00:09:46.680 "aliases": [ 00:09:46.680 "0b9f218e-109e-4e0a-bfaf-ddc8b82178a5" 00:09:46.680 ], 00:09:46.680 "product_name": "Malloc disk", 00:09:46.680 "block_size": 512, 00:09:46.680 "num_blocks": 1048576, 00:09:46.680 "uuid": "0b9f218e-109e-4e0a-bfaf-ddc8b82178a5", 00:09:46.680 "assigned_rate_limits": { 00:09:46.680 "rw_ios_per_sec": 0, 00:09:46.680 "rw_mbytes_per_sec": 0, 00:09:46.680 "r_mbytes_per_sec": 0, 00:09:46.680 "w_mbytes_per_sec": 0 00:09:46.680 }, 00:09:46.680 "claimed": true, 00:09:46.680 "claim_type": "exclusive_write", 00:09:46.680 "zoned": false, 00:09:46.680 "supported_io_types": { 00:09:46.680 "read": true, 00:09:46.680 "write": true, 00:09:46.680 "unmap": true, 00:09:46.680 "flush": true, 00:09:46.680 "reset": true, 00:09:46.680 "nvme_admin": false, 00:09:46.680 "nvme_io": false, 00:09:46.680 "nvme_io_md": false, 00:09:46.680 "write_zeroes": true, 00:09:46.680 "zcopy": true, 00:09:46.680 "get_zone_info": false, 00:09:46.680 "zone_management": false, 00:09:46.680 "zone_append": false, 00:09:46.680 "compare": false, 00:09:46.680 "compare_and_write": 
false, 00:09:46.680 "abort": true, 00:09:46.680 "seek_hole": false, 00:09:46.680 "seek_data": false, 00:09:46.680 "copy": true, 00:09:46.680 "nvme_iov_md": false 00:09:46.680 }, 00:09:46.680 "memory_domains": [ 00:09:46.680 { 00:09:46.680 "dma_device_id": "system", 00:09:46.680 "dma_device_type": 1 00:09:46.680 }, 00:09:46.680 { 00:09:46.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.680 "dma_device_type": 2 00:09:46.680 } 00:09:46.680 ], 00:09:46.680 "driver_specific": {} 00:09:46.680 } 00:09:46.680 ]' 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:46.680 03:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.053 03:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
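The `get_bdev_size` steps above derive the expected namespace size by pulling `block_size` and `num_blocks` out of the `bdev_get_bdevs` JSON with `jq` and multiplying. As a minimal sketch, using the exact values from the JSON in this trace, the arithmetic is:

```shell
# Recompute the malloc bdev size from the bdev_get_bdevs fields shown above.
block_size=512        # bytes per block ("block_size" in the JSON)
num_blocks=1048576    # total blocks ("num_blocks" in the JSON)
malloc_size=$((block_size * num_blocks))
echo "$malloc_size"   # 536870912 bytes, i.e. the 512 MiB echoed as malloc_size
```

This matches the `malloc_size=536870912` the script records and later compares against `nvme_size`.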
# waitforserial SPDKISFASTANDAWESOME 00:09:48.053 03:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:48.053 03:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.053 03:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:48.053 03:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:49.953 03:18:09 
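The device-lookup step above (`filesystem.sh@63`) maps the subsystem serial `SPDKISFASTANDAWESOME` back to its block device name with a PCRE lookahead. A minimal sketch of that extraction, fed a hypothetical `lsblk -l -o NAME,SERIAL` line instead of a live device:

```shell
# Extract the device name that precedes the serial, as the trace does.
# The input line is a stand-in for real `lsblk -l -o NAME,SERIAL` output.
line='nvme0n1 SPDKISFASTANDAWESOME'
nvme_name=$(echo "$line" | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
echo "$nvme_name"   # nvme0n1
```

The lookahead `(?=\s+SERIAL)` keeps the serial out of the match, so only the device name is captured (here yielding `nvme_name=nvme0n1`, as in the trace).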
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:49.953 03:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:50.211 03:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:50.774 03:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:51.708 03:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.708 ************************************ 00:09:51.708 START TEST filesystem_ext4 00:09:51.708 ************************************ 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:51.708 03:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:51.708 03:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:51.708 mke2fs 1.47.0 (5-Feb-2023) 00:09:51.708 Discarding device blocks: 0/522240 done 00:09:51.708 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:51.708 Filesystem UUID: 365cadca-cbc3-48ef-ba39-a993b4462841 00:09:51.708 Superblock backups stored on blocks: 00:09:51.708 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:51.708 00:09:51.708 Allocating group tables: 0/64 done 00:09:51.708 Writing inode tables: 0/64 done 00:09:51.967 Creating journal (8192 blocks): done 00:09:54.163 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:09:54.163 00:09:54.164 03:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:54.164 03:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:59.426 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:59.684 03:18:19 
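The mke2fs output above reports 522240 blocks of 1 KiB, which is consistent with the ~510 MiB `SPDK_TEST` partition (the 512 MiB malloc bdev minus GPT overhead). A quick arithmetic check:

```shell
# mke2fs reported "522240 1k blocks" for /dev/nvme0n1p1.
fs_bytes=$((522240 * 1024))
echo "$fs_bytes"                    # 534773760
echo "$((fs_bytes / 1048576)) MiB"  # 510 MiB
```

So the ext4 filesystem occupies exactly 510 MiB of the 512 MiB device, leaving the partition-table overhead unaccounted, as expected.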
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2518557 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:59.684 00:09:59.684 real 0m7.920s 00:09:59.684 user 0m0.037s 00:09:59.684 sys 0m0.066s 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:59.684 ************************************ 00:09:59.684 END TEST filesystem_ext4 00:09:59.684 ************************************ 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:59.684 
03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.684 ************************************ 00:09:59.684 START TEST filesystem_btrfs 00:09:59.684 ************************************ 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:59.684 03:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:59.684 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:59.943 btrfs-progs v6.8.1 00:09:59.943 See https://btrfs.readthedocs.io for more information. 00:09:59.943 00:09:59.943 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:59.943 NOTE: several default settings have changed in version 5.15, please make sure 00:09:59.943 this does not affect your deployments: 00:09:59.943 - DUP for metadata (-m dup) 00:09:59.943 - enabled no-holes (-O no-holes) 00:09:59.943 - enabled free-space-tree (-R free-space-tree) 00:09:59.943 00:09:59.943 Label: (null) 00:09:59.943 UUID: 9be88f04-c0d3-4bab-90e7-d31a9b063c70 00:09:59.943 Node size: 16384 00:09:59.943 Sector size: 4096 (CPU page size: 4096) 00:09:59.943 Filesystem size: 510.00MiB 00:09:59.943 Block group profiles: 00:09:59.943 Data: single 8.00MiB 00:09:59.943 Metadata: DUP 32.00MiB 00:09:59.943 System: DUP 8.00MiB 00:09:59.943 SSD detected: yes 00:09:59.943 Zoned device: no 00:09:59.943 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:59.943 Checksum: crc32c 00:09:59.943 Number of devices: 1 00:09:59.943 Devices: 00:09:59.943 ID SIZE PATH 00:09:59.943 1 510.00MiB /dev/nvme0n1p1 00:09:59.943 00:09:59.943 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:59.943 03:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:00.878 03:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:00.878 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:00.878 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:00.878 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:00.878 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2518557 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:00.879 00:10:00.879 real 0m1.120s 00:10:00.879 user 0m0.023s 00:10:00.879 sys 0m0.115s 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.879 
03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:00.879 ************************************ 00:10:00.879 END TEST filesystem_btrfs 00:10:00.879 ************************************ 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.879 ************************************ 00:10:00.879 START TEST filesystem_xfs 00:10:00.879 ************************************ 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:00.879 03:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:00.879 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:00.879 = sectsz=512 attr=2, projid32bit=1 00:10:00.879 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:00.879 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:00.879 data = bsize=4096 blocks=130560, imaxpct=25 00:10:00.879 = sunit=0 swidth=0 blks 00:10:00.879 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:00.879 log =internal log bsize=4096 blocks=16384, version=2 00:10:00.879 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:00.879 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:01.814 Discarding blocks...Done. 
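The mkfs.xfs geometry above (`bsize=4096`, data `blocks=130560`) describes the same ~510 MiB partition that ext4 and btrfs formatted earlier in this run; a quick sanity check of the reported figures:

```shell
# mkfs.xfs reported 130560 data blocks of 4096 bytes each.
xfs_bytes=$((130560 * 4096))
echo "$xfs_bytes"                    # 534773760
echo "$((xfs_bytes / 1048576)) MiB"  # 510 MiB, matching btrfs's "510.00MiB"
```

All three filesystems therefore see an identical 534773760-byte partition, confirming the partitioning step produced a stable device across tests.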
00:10:01.814 03:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:01.814 03:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2518557 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:04.345 03:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:04.345 00:10:04.345 real 0m3.367s 00:10:04.345 user 0m0.026s 00:10:04.345 sys 0m0.072s 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:04.345 ************************************ 00:10:04.345 END TEST filesystem_xfs 00:10:04.345 ************************************ 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.345 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2518557 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2518557 ']' 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2518557 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518557 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518557' 00:10:04.604 killing process with pid 2518557 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2518557 00:10:04.604 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2518557 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:04.862 00:10:04.862 real 0m18.767s 00:10:04.862 user 1m14.043s 00:10:04.862 sys 0m1.362s 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.862 ************************************ 00:10:04.862 END TEST nvmf_filesystem_no_in_capsule 00:10:04.862 ************************************ 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.862 03:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.862 ************************************ 00:10:04.862 START TEST nvmf_filesystem_in_capsule 00:10:04.862 ************************************ 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2522179 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2522179 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2522179 ']' 00:10:04.862 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.862 03:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.863 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.863 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.863 03:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.120 [2024-12-06 03:18:25.008514] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:10:05.120 [2024-12-06 03:18:25.008558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.120 [2024-12-06 03:18:25.075320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.120 [2024-12-06 03:18:25.116871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.120 [2024-12-06 03:18:25.116911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.120 [2024-12-06 03:18:25.116918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.120 [2024-12-06 03:18:25.116925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.120 [2024-12-06 03:18:25.116931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:05.120 [2024-12-06 03:18:25.118446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.120 [2024-12-06 03:18:25.118545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.120 [2024-12-06 03:18:25.118622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.120 [2024-12-06 03:18:25.118623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.120 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.120 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:05.120 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:05.120 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.120 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.378 [2024-12-06 03:18:25.265414] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.378 Malloc1 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.378 03:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.378 [2024-12-06 03:18:25.429132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.378 03:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:05.378 { 00:10:05.378 "name": "Malloc1", 00:10:05.378 "aliases": [ 00:10:05.378 "998c54ca-914e-4906-94a7-7599ecc7fc66" 00:10:05.378 ], 00:10:05.378 "product_name": "Malloc disk", 00:10:05.378 "block_size": 512, 00:10:05.378 "num_blocks": 1048576, 00:10:05.378 "uuid": "998c54ca-914e-4906-94a7-7599ecc7fc66", 00:10:05.378 "assigned_rate_limits": { 00:10:05.378 "rw_ios_per_sec": 0, 00:10:05.378 "rw_mbytes_per_sec": 0, 00:10:05.378 "r_mbytes_per_sec": 0, 00:10:05.378 "w_mbytes_per_sec": 0 00:10:05.378 }, 00:10:05.378 "claimed": true, 00:10:05.378 "claim_type": "exclusive_write", 00:10:05.378 "zoned": false, 00:10:05.378 "supported_io_types": { 00:10:05.378 "read": true, 00:10:05.378 "write": true, 00:10:05.378 "unmap": true, 00:10:05.378 "flush": true, 00:10:05.378 "reset": true, 00:10:05.378 "nvme_admin": false, 00:10:05.378 "nvme_io": false, 00:10:05.378 "nvme_io_md": false, 00:10:05.378 "write_zeroes": true, 00:10:05.378 "zcopy": true, 00:10:05.378 "get_zone_info": false, 00:10:05.378 "zone_management": false, 00:10:05.378 "zone_append": false, 00:10:05.378 "compare": false, 00:10:05.378 "compare_and_write": false, 00:10:05.378 "abort": true, 00:10:05.378 "seek_hole": false, 00:10:05.378 "seek_data": false, 00:10:05.378 "copy": true, 00:10:05.378 "nvme_iov_md": false 00:10:05.378 }, 00:10:05.378 "memory_domains": [ 00:10:05.378 { 00:10:05.378 "dma_device_id": "system", 00:10:05.378 "dma_device_type": 1 00:10:05.378 }, 00:10:05.378 { 00:10:05.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.378 "dma_device_type": 2 00:10:05.378 } 00:10:05.378 ], 00:10:05.378 
"driver_specific": {} 00:10:05.378 } 00:10:05.378 ]' 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:05.378 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:05.636 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:05.636 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:05.636 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:05.636 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:05.636 03:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.569 03:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.569 03:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:06.569 03:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.569 03:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:06.569 03:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:09.100 03:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:09.100 03:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:09.100 03:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:10.032 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:10.032 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:10.032 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:10.032 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.032 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.289 ************************************ 00:10:10.289 START TEST filesystem_in_capsule_ext4 00:10:10.289 ************************************ 00:10:10.289 03:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:10.289 03:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:10.289 mke2fs 1.47.0 (5-Feb-2023) 00:10:10.289 Discarding device blocks: 
0/522240 done 00:10:10.289 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:10.289 Filesystem UUID: c2f204de-5b14-4131-86d1-c798145a8642 00:10:10.289 Superblock backups stored on blocks: 00:10:10.289 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:10.289 00:10:10.290 Allocating group tables: 0/64 done 00:10:10.290 Writing inode tables: 0/64 done 00:10:13.566 Creating journal (8192 blocks): done 00:10:13.566 Writing superblocks and filesystem accounting information: 0/64 done 00:10:13.566 00:10:13.566 03:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:13.566 03:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:18.842 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:18.842 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:18.842 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:18.842 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:18.842 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:18.842 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:19.101 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2522179 00:10:19.101 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:19.101 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:19.101 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:19.101 03:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:19.101 00:10:19.101 real 0m8.828s 00:10:19.101 user 0m0.025s 00:10:19.101 sys 0m0.077s 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:19.101 ************************************ 00:10:19.101 END TEST filesystem_in_capsule_ext4 00:10:19.101 ************************************ 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.101 ************************************ 00:10:19.101 START 
TEST filesystem_in_capsule_btrfs 00:10:19.101 ************************************ 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:19.101 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:19.360 btrfs-progs v6.8.1 00:10:19.360 See https://btrfs.readthedocs.io for more information. 00:10:19.360 00:10:19.360 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:19.360 NOTE: several default settings have changed in version 5.15, please make sure 00:10:19.360 this does not affect your deployments: 00:10:19.360 - DUP for metadata (-m dup) 00:10:19.360 - enabled no-holes (-O no-holes) 00:10:19.360 - enabled free-space-tree (-R free-space-tree) 00:10:19.360 00:10:19.360 Label: (null) 00:10:19.360 UUID: 47396b25-485b-45ef-be2f-91209401886b 00:10:19.360 Node size: 16384 00:10:19.360 Sector size: 4096 (CPU page size: 4096) 00:10:19.360 Filesystem size: 510.00MiB 00:10:19.360 Block group profiles: 00:10:19.360 Data: single 8.00MiB 00:10:19.360 Metadata: DUP 32.00MiB 00:10:19.360 System: DUP 8.00MiB 00:10:19.360 SSD detected: yes 00:10:19.360 Zoned device: no 00:10:19.360 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:19.360 Checksum: crc32c 00:10:19.360 Number of devices: 1 00:10:19.360 Devices: 00:10:19.360 ID SIZE PATH 00:10:19.360 1 510.00MiB /dev/nvme0n1p1 00:10:19.360 00:10:19.360 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:19.360 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:19.927 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:19.927 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:19.927 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:19.927 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:19.927 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:19.927 03:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:19.927 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2522179 00:10:19.927 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:19.927 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:19.927 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:19.927 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:19.927 00:10:19.927 real 0m0.953s 00:10:19.927 user 0m0.028s 00:10:19.927 sys 0m0.114s 00:10:19.927 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.927 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:19.927 ************************************ 00:10:19.927 END TEST filesystem_in_capsule_btrfs 00:10:19.927 ************************************ 00:10:20.186 03:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.186 ************************************ 00:10:20.186 START TEST filesystem_in_capsule_xfs 00:10:20.186 ************************************ 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:20.186 
03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:20.186 03:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:20.186 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:20.186 = sectsz=512 attr=2, projid32bit=1 00:10:20.186 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:20.186 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:20.186 data = bsize=4096 blocks=130560, imaxpct=25 00:10:20.186 = sunit=0 swidth=0 blks 00:10:20.187 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:20.187 log =internal log bsize=4096 blocks=16384, version=2 00:10:20.187 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:20.187 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:21.563 Discarding blocks...Done. 
00:10:21.563 03:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:21.563 03:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:22.938 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2522179 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:23.197 00:10:23.197 real 0m3.045s 00:10:23.197 user 0m0.026s 00:10:23.197 sys 0m0.074s 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:23.197 ************************************ 00:10:23.197 END TEST filesystem_in_capsule_xfs 00:10:23.197 ************************************ 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:23.197 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.454 03:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2522179 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2522179 ']' 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2522179 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.454 03:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522179 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522179' 00:10:23.454 killing process with pid 2522179 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2522179 00:10:23.454 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2522179 00:10:23.712 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:23.713 00:10:23.713 real 0m18.864s 00:10:23.713 user 1m14.316s 00:10:23.713 sys 0m1.422s 00:10:23.713 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.713 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.713 ************************************ 00:10:23.713 END TEST nvmf_filesystem_in_capsule 00:10:23.713 ************************************ 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:23.971 rmmod nvme_tcp 00:10:23.971 rmmod nvme_fabrics 00:10:23.971 rmmod nvme_keyring 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.971 03:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.876 03:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:25.876 00:10:25.876 real 0m45.917s 00:10:25.876 user 2m30.337s 00:10:25.876 sys 0m7.099s 00:10:25.876 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.876 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.876 ************************************ 00:10:25.876 END TEST nvmf_filesystem 00:10:25.876 ************************************ 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:26.135 ************************************ 00:10:26.135 START TEST nvmf_target_discovery 00:10:26.135 ************************************ 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:26.135 * Looking for test storage... 
00:10:26.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:26.135 
03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:26.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.135 --rc genhtml_branch_coverage=1 00:10:26.135 --rc genhtml_function_coverage=1 00:10:26.135 --rc genhtml_legend=1 00:10:26.135 --rc geninfo_all_blocks=1 00:10:26.135 --rc geninfo_unexecuted_blocks=1 00:10:26.135 00:10:26.135 ' 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:26.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.135 --rc genhtml_branch_coverage=1 00:10:26.135 --rc genhtml_function_coverage=1 00:10:26.135 --rc genhtml_legend=1 00:10:26.135 --rc geninfo_all_blocks=1 00:10:26.135 --rc geninfo_unexecuted_blocks=1 00:10:26.135 00:10:26.135 ' 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:26.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.135 --rc genhtml_branch_coverage=1 00:10:26.135 --rc genhtml_function_coverage=1 00:10:26.135 --rc genhtml_legend=1 00:10:26.135 --rc geninfo_all_blocks=1 00:10:26.135 --rc geninfo_unexecuted_blocks=1 00:10:26.135 00:10:26.135 ' 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:26.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.135 --rc genhtml_branch_coverage=1 00:10:26.135 --rc genhtml_function_coverage=1 00:10:26.135 --rc genhtml_legend=1 00:10:26.135 --rc geninfo_all_blocks=1 00:10:26.135 --rc geninfo_unexecuted_blocks=1 00:10:26.135 00:10:26.135 ' 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.135 03:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.135 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:26.136 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:26.394 03:18:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.836 03:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.837 03:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:31.837 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:31.837 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.837 03:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:31.837 Found net devices under 0000:86:00.0: cvl_0_0 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.837 03:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:31.837 Found net devices under 0000:86:00.1: cvl_0_1 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.837 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:10:31.837 00:10:31.838 --- 10.0.0.2 ping statistics --- 00:10:31.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.838 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:10:31.838 00:10:31.838 --- 10.0.0.1 ping statistics --- 00:10:31.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.838 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2528941 00:10:31.838 03:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2528941 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2528941 ']' 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.838 03:18:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.838 [2024-12-06 03:18:51.880160] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:10:31.838 [2024-12-06 03:18:51.880205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.838 [2024-12-06 03:18:51.946101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.097 [2024-12-06 03:18:51.990075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:32.097 [2024-12-06 03:18:51.990110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.097 [2024-12-06 03:18:51.990117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.097 [2024-12-06 03:18:51.990124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.097 [2024-12-06 03:18:51.990130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.097 [2024-12-06 03:18:51.991689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.097 [2024-12-06 03:18:51.991784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.097 [2024-12-06 03:18:51.991873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.097 [2024-12-06 03:18:51.991875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.097 [2024-12-06 03:18:52.143011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.097 Null1 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.097 
03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.097 [2024-12-06 03:18:52.201090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.097 Null2 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.097 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.097 
03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.098 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:32.098 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.098 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.098 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.098 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:32.098 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.098 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 Null3 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 Null4 00:10:32.356 
03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.356 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:32.615 00:10:32.615 Discovery Log Number of Records 6, Generation counter 6 00:10:32.615 =====Discovery Log Entry 0====== 00:10:32.615 trtype: tcp 00:10:32.615 adrfam: ipv4 00:10:32.615 subtype: current discovery subsystem 00:10:32.615 treq: not required 00:10:32.615 portid: 0 00:10:32.615 trsvcid: 4420 00:10:32.615 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:32.615 traddr: 10.0.0.2 00:10:32.615 eflags: explicit discovery connections, duplicate discovery information 00:10:32.615 sectype: none 00:10:32.615 =====Discovery Log Entry 1====== 00:10:32.615 trtype: tcp 00:10:32.615 adrfam: ipv4 00:10:32.615 subtype: nvme subsystem 00:10:32.615 treq: not required 00:10:32.615 portid: 0 00:10:32.615 trsvcid: 4420 00:10:32.615 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:32.615 traddr: 10.0.0.2 00:10:32.615 eflags: none 00:10:32.615 sectype: none 00:10:32.615 =====Discovery Log Entry 2====== 00:10:32.615 
trtype: tcp 00:10:32.615 adrfam: ipv4 00:10:32.615 subtype: nvme subsystem 00:10:32.615 treq: not required 00:10:32.615 portid: 0 00:10:32.615 trsvcid: 4420 00:10:32.615 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:32.615 traddr: 10.0.0.2 00:10:32.615 eflags: none 00:10:32.615 sectype: none 00:10:32.615 =====Discovery Log Entry 3====== 00:10:32.615 trtype: tcp 00:10:32.615 adrfam: ipv4 00:10:32.615 subtype: nvme subsystem 00:10:32.615 treq: not required 00:10:32.615 portid: 0 00:10:32.615 trsvcid: 4420 00:10:32.615 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:32.615 traddr: 10.0.0.2 00:10:32.615 eflags: none 00:10:32.615 sectype: none 00:10:32.615 =====Discovery Log Entry 4====== 00:10:32.615 trtype: tcp 00:10:32.615 adrfam: ipv4 00:10:32.615 subtype: nvme subsystem 00:10:32.615 treq: not required 00:10:32.615 portid: 0 00:10:32.615 trsvcid: 4420 00:10:32.615 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:32.615 traddr: 10.0.0.2 00:10:32.615 eflags: none 00:10:32.615 sectype: none 00:10:32.615 =====Discovery Log Entry 5====== 00:10:32.615 trtype: tcp 00:10:32.615 adrfam: ipv4 00:10:32.615 subtype: discovery subsystem referral 00:10:32.615 treq: not required 00:10:32.615 portid: 0 00:10:32.615 trsvcid: 4430 00:10:32.615 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:32.615 traddr: 10.0.0.2 00:10:32.615 eflags: none 00:10:32.615 sectype: none 00:10:32.615 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:32.615 Perform nvmf subsystem discovery via RPC 00:10:32.615 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:32.615 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.615 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.615 [ 00:10:32.615 { 00:10:32.615 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:32.615 "subtype": "Discovery", 00:10:32.615 "listen_addresses": [ 00:10:32.615 { 00:10:32.615 "trtype": "TCP", 00:10:32.615 "adrfam": "IPv4", 00:10:32.615 "traddr": "10.0.0.2", 00:10:32.615 "trsvcid": "4420" 00:10:32.615 } 00:10:32.615 ], 00:10:32.615 "allow_any_host": true, 00:10:32.615 "hosts": [] 00:10:32.615 }, 00:10:32.615 { 00:10:32.615 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.615 "subtype": "NVMe", 00:10:32.616 "listen_addresses": [ 00:10:32.616 { 00:10:32.616 "trtype": "TCP", 00:10:32.616 "adrfam": "IPv4", 00:10:32.616 "traddr": "10.0.0.2", 00:10:32.616 "trsvcid": "4420" 00:10:32.616 } 00:10:32.616 ], 00:10:32.616 "allow_any_host": true, 00:10:32.616 "hosts": [], 00:10:32.616 "serial_number": "SPDK00000000000001", 00:10:32.616 "model_number": "SPDK bdev Controller", 00:10:32.616 "max_namespaces": 32, 00:10:32.616 "min_cntlid": 1, 00:10:32.616 "max_cntlid": 65519, 00:10:32.616 "namespaces": [ 00:10:32.616 { 00:10:32.616 "nsid": 1, 00:10:32.616 "bdev_name": "Null1", 00:10:32.616 "name": "Null1", 00:10:32.616 "nguid": "39419998AFBD44C8850683EAC653DA7A", 00:10:32.616 "uuid": "39419998-afbd-44c8-8506-83eac653da7a" 00:10:32.616 } 00:10:32.616 ] 00:10:32.616 }, 00:10:32.616 { 00:10:32.616 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:32.616 "subtype": "NVMe", 00:10:32.616 "listen_addresses": [ 00:10:32.616 { 00:10:32.616 "trtype": "TCP", 00:10:32.616 "adrfam": "IPv4", 00:10:32.616 "traddr": "10.0.0.2", 00:10:32.616 "trsvcid": "4420" 00:10:32.616 } 00:10:32.616 ], 00:10:32.616 "allow_any_host": true, 00:10:32.616 "hosts": [], 00:10:32.616 "serial_number": "SPDK00000000000002", 00:10:32.616 "model_number": "SPDK bdev Controller", 00:10:32.616 "max_namespaces": 32, 00:10:32.616 "min_cntlid": 1, 00:10:32.616 "max_cntlid": 65519, 00:10:32.616 "namespaces": [ 00:10:32.616 { 00:10:32.616 "nsid": 1, 00:10:32.616 "bdev_name": "Null2", 00:10:32.616 "name": "Null2", 00:10:32.616 "nguid": "02A27C6C2529433388C811493E14276A", 
00:10:32.616 "uuid": "02a27c6c-2529-4333-88c8-11493e14276a" 00:10:32.616 } 00:10:32.616 ] 00:10:32.616 }, 00:10:32.616 { 00:10:32.616 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:32.616 "subtype": "NVMe", 00:10:32.616 "listen_addresses": [ 00:10:32.616 { 00:10:32.616 "trtype": "TCP", 00:10:32.616 "adrfam": "IPv4", 00:10:32.616 "traddr": "10.0.0.2", 00:10:32.616 "trsvcid": "4420" 00:10:32.616 } 00:10:32.616 ], 00:10:32.616 "allow_any_host": true, 00:10:32.616 "hosts": [], 00:10:32.616 "serial_number": "SPDK00000000000003", 00:10:32.616 "model_number": "SPDK bdev Controller", 00:10:32.616 "max_namespaces": 32, 00:10:32.616 "min_cntlid": 1, 00:10:32.616 "max_cntlid": 65519, 00:10:32.616 "namespaces": [ 00:10:32.616 { 00:10:32.616 "nsid": 1, 00:10:32.616 "bdev_name": "Null3", 00:10:32.616 "name": "Null3", 00:10:32.616 "nguid": "AB5B1C55F7D945778AFF4B376F7F02FD", 00:10:32.616 "uuid": "ab5b1c55-f7d9-4577-8aff-4b376f7f02fd" 00:10:32.616 } 00:10:32.616 ] 00:10:32.616 }, 00:10:32.616 { 00:10:32.616 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:32.616 "subtype": "NVMe", 00:10:32.616 "listen_addresses": [ 00:10:32.616 { 00:10:32.616 "trtype": "TCP", 00:10:32.616 "adrfam": "IPv4", 00:10:32.616 "traddr": "10.0.0.2", 00:10:32.616 "trsvcid": "4420" 00:10:32.616 } 00:10:32.616 ], 00:10:32.616 "allow_any_host": true, 00:10:32.616 "hosts": [], 00:10:32.616 "serial_number": "SPDK00000000000004", 00:10:32.616 "model_number": "SPDK bdev Controller", 00:10:32.616 "max_namespaces": 32, 00:10:32.616 "min_cntlid": 1, 00:10:32.616 "max_cntlid": 65519, 00:10:32.616 "namespaces": [ 00:10:32.616 { 00:10:32.616 "nsid": 1, 00:10:32.616 "bdev_name": "Null4", 00:10:32.616 "name": "Null4", 00:10:32.616 "nguid": "5F1A5A5C93564FC29941AC33257264CC", 00:10:32.616 "uuid": "5f1a5a5c-9356-4fc2-9941-ac33257264cc" 00:10:32.616 } 00:10:32.616 ] 00:10:32.616 } 00:10:32.616 ] 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.616 
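The `nvmf_get_subsystems` JSON above is usually consumed with `jq` (as `discovery.sh` itself does via `jq -r '.[].name'`), but a plain-text fallback can be useful on minimal test images. The sketch below is a hypothetical illustration, not part of the test: it extracts the `"nqn"` fields from a trimmed stand-in for the RPC output using only `sed`.

```shell
# Hypothetical sketch: pull the "nqn" values out of nvmf_get_subsystems-style
# JSON without jq. The heredoc-style sample is a trimmed stand-in for the real
# RPC output shown in the log above, not captured data.
json='[
  { "nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery" },
  { "nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe" },
  { "nqn": "nqn.2016-06.io.spdk:cnode2", "subtype": "NVMe" }
]'
# Print every "nqn": "<value>" pair, one value per line.
nqns=$(printf '%s\n' "$json" | sed -n 's/.*"nqn": "\([^"]*\)".*/\1/p')
printf '%s\n' "$nqns"
```

This relies on the RPC emitting one `"nqn"` key per line, which holds for pretty-printed output but not for compact JSON; `jq` remains the robust choice.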
03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:32.616 rmmod nvme_tcp
00:10:32.616 rmmod nvme_fabrics
00:10:32.616 rmmod nvme_keyring
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2528941 ']'
00:10:32.616 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2528941
00:10:32.617 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2528941 ']'
00:10:32.617 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2528941
00:10:32.617 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:10:32.875 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:32.875 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2528941
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2528941'
killing process with pid 2528941
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2528941
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2528941
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:32.876 03:18:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:35.407
00:10:35.407 real 0m8.975s
00:10:35.407 user 0m5.626s
00:10:35.407 sys 0m4.563s
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:35.407 ************************************
00:10:35.407 END TEST nvmf_target_discovery
00:10:35.407 ************************************
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:35.407 ************************************
00:10:35.407 START TEST nvmf_referrals
00:10:35.407 ************************************
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:10:35.407 * Looking for test storage...
00:10:35.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:10:35.407 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:35.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:35.408 --rc genhtml_branch_coverage=1
00:10:35.408 --rc genhtml_function_coverage=1
00:10:35.408 --rc genhtml_legend=1
00:10:35.408 --rc geninfo_all_blocks=1
00:10:35.408 --rc geninfo_unexecuted_blocks=1
00:10:35.408
00:10:35.408 '
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:35.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:35.408 --rc genhtml_branch_coverage=1
00:10:35.408 --rc genhtml_function_coverage=1
00:10:35.408 --rc genhtml_legend=1
00:10:35.408 --rc geninfo_all_blocks=1
00:10:35.408 --rc geninfo_unexecuted_blocks=1
00:10:35.408
00:10:35.408 '
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:35.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:35.408 --rc genhtml_branch_coverage=1
00:10:35.408 --rc genhtml_function_coverage=1
00:10:35.408 --rc genhtml_legend=1
00:10:35.408 --rc geninfo_all_blocks=1
00:10:35.408 --rc geninfo_unexecuted_blocks=1
00:10:35.408
00:10:35.408 '
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:35.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:35.408 --rc genhtml_branch_coverage=1
00:10:35.408 --rc genhtml_function_coverage=1
00:10:35.408 --rc genhtml_legend=1
00:10:35.408 --rc geninfo_all_blocks=1
00:10:35.408 --rc geninfo_unexecuted_blocks=1
00:10:35.408
00:10:35.408 '
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:35.408 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:35.409 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:10:35.409 03:18:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=()
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:40.664 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:40.665 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:40.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:40.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms
00:10:40.923
00:10:40.923 --- 10.0.0.2 ping statistics ---
00:10:40.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:40.923 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:40.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:40.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms
00:10:40.923
00:10:40.923 --- 10.0.0.1 ping statistics ---
00:10:40.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:40.923 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:40.923 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2532716
00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2532716 00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2532716 ']' 00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.924 03:19:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.924 [2024-12-06 03:19:00.978618] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:10:40.924 [2024-12-06 03:19:00.978660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.924 [2024-12-06 03:19:01.044466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.183 [2024-12-06 03:19:01.088972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.183 [2024-12-06 03:19:01.089007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
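The namespace plumbing traced above (nvmf/common.sh@250–@291 plus the nvmf_tgt launch) condenses to the outline below. This is a sketch reconstructed from the trace, not a standalone script: it assumes root privileges, an SPDK build tree, and a NIC whose two ports the harness has already renamed cvl_0_0 and cvl_0_1.

```shell
# Target port lives in its own network namespace; initiator port stays in the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# 10.0.0.2 = target (inside the namespace), 10.0.0.1 = initiator (host side).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions, then start the target
# inside the namespace (same flags as the traced run).
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
```

Because the target runs inside cvl_0_0_ns_spdk, every subsequent RPC-launching command in the trace is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD array), while the nvme discover calls run from the host side against 10.0.0.2:8009.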
00:10:41.183 [2024-12-06 03:19:01.089014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.183 [2024-12-06 03:19:01.089020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.183 [2024-12-06 03:19:01.089025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.183 [2024-12-06 03:19:01.090540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.183 [2024-12-06 03:19:01.090636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.183 [2024-12-06 03:19:01.090730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.183 [2024-12-06 03:19:01.090731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.183 [2024-12-06 03:19:01.241372] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.183 [2024-12-06 03:19:01.265126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:41.183 03:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.183 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:41.441 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.700 03:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:41.700 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:41.958 03:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:41.958 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:41.958 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:41.958 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:41.958 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:41.958 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:41.958 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:41.958 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:42.217 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:42.217 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:42.217 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:42.217 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:42.217 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:42.217 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:42.475 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:42.475 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:42.475 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.475 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:42.476 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:42.734 03:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:42.734 03:19:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:42.992 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.251 rmmod nvme_tcp 00:10:43.251 rmmod nvme_fabrics 00:10:43.251 rmmod nvme_keyring 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2532716 ']' 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2532716 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2532716 ']' 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2532716 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.251 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2532716 00:10:43.510 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.510 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.510 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2532716' 00:10:43.510 killing process with pid 2532716 00:10:43.510 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2532716 00:10:43.510 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2532716 00:10:43.510 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.510 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.511 03:19:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.042 00:10:46.042 real 0m10.535s 00:10:46.042 user 0m12.428s 00:10:46.042 sys 0m4.910s 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.042 
************************************ 00:10:46.042 END TEST nvmf_referrals 00:10:46.042 ************************************ 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.042 ************************************ 00:10:46.042 START TEST nvmf_connect_disconnect 00:10:46.042 ************************************ 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:46.042 * Looking for test storage... 
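The END/START banners above come from the harness's `run_test` wrapper, which brackets each subtest with a banner and propagates its exit code. A minimal sketch of that pattern (the helper body here is inferred from the log output, not SPDK's actual `autotest_common.sh` implementation):

```shell
#!/usr/bin/env bash
# Hedged sketch of the banner pattern run_test prints around each subtest.
# The banners match the log; the function body is a guess.
run_test() {
    local name=$1; shift
    printf '%s\n' "************************************"
    printf 'START TEST %s\n' "$name"
    printf '%s\n' "************************************"
    "$@"                       # run the subtest command with its arguments
    local rc=$?
    printf '%s\n' "************************************"
    printf 'END TEST %s\n' "$name"
    printf '%s\n' "************************************"
    return $rc
}

run_test demo_test true
```

In the real log the wrapper also records timing and suppresses xtrace around the banners; this sketch keeps only the visible banner behavior.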
00:10:46.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:46.042 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.043 --rc genhtml_branch_coverage=1 00:10:46.043 --rc genhtml_function_coverage=1 00:10:46.043 --rc genhtml_legend=1 00:10:46.043 --rc geninfo_all_blocks=1 00:10:46.043 --rc geninfo_unexecuted_blocks=1 00:10:46.043 00:10:46.043 ' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.043 --rc genhtml_branch_coverage=1 00:10:46.043 --rc genhtml_function_coverage=1 00:10:46.043 --rc genhtml_legend=1 00:10:46.043 --rc geninfo_all_blocks=1 00:10:46.043 --rc geninfo_unexecuted_blocks=1 00:10:46.043 00:10:46.043 ' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.043 --rc genhtml_branch_coverage=1 00:10:46.043 --rc genhtml_function_coverage=1 00:10:46.043 --rc genhtml_legend=1 00:10:46.043 --rc geninfo_all_blocks=1 00:10:46.043 --rc geninfo_unexecuted_blocks=1 00:10:46.043 00:10:46.043 ' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.043 --rc genhtml_branch_coverage=1 00:10:46.043 --rc genhtml_function_coverage=1 00:10:46.043 --rc genhtml_legend=1 00:10:46.043 --rc geninfo_all_blocks=1 00:10:46.043 --rc geninfo_unexecuted_blocks=1 00:10:46.043 00:10:46.043 ' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.043 03:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.311 03:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.311 03:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:51.311 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:51.311 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.311 03:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:51.311 Found net devices under 0000:86:00.0: cvl_0_0 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.311 03:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.311 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:51.312 Found net devices under 0000:86:00.1: cvl_0_1 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.312 03:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:10:51.312 00:10:51.312 --- 10.0.0.2 ping statistics --- 00:10:51.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.312 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:10:51.312 00:10:51.312 --- 10.0.0.1 ping statistics --- 00:10:51.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.312 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2536575 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2536575 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2536575 ']' 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.312 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.312 [2024-12-06 03:19:11.424001] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:10:51.312 [2024-12-06 03:19:11.424047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.571 [2024-12-06 03:19:11.489155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.571 [2024-12-06 03:19:11.532413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
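The `waitforlisten 2536575` call above polls in a bounded retry loop until the freshly started `nvmf_tgt` process is up and its RPC socket (`/var/tmp/spdk.sock`) exists. A minimal sketch of that polling pattern, assuming a plain temp file as a stand-in for the RPC socket (the helper name and retry count come from the log; the body is a guess):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: retry up to max_retries times,
# sleeping between attempts, until the target path appears.
waitforlisten() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0   # real helper checks the RPC socket
        sleep 0.05
    done
    return 1
}

f=$(mktemp -u)
( sleep 0.2; : > "$f" ) &   # stand-in for the app creating its socket
waitforlisten "$f" && echo "ready: $f"
wait
rm -f "$f"
```

The real helper additionally verifies the process is still alive between retries, so a crashed target fails fast instead of burning the full retry budget.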
00:10:51.571 [2024-12-06 03:19:11.532452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.571 [2024-12-06 03:19:11.532460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.571 [2024-12-06 03:19:11.532466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.571 [2024-12-06 03:19:11.532472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.571 [2024-12-06 03:19:11.534017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.571 [2024-12-06 03:19:11.534035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.571 [2024-12-06 03:19:11.534125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.571 [2024-12-06 03:19:11.534126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:51.571 03:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.571 [2024-12-06 03:19:11.676600] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.571 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.830 03:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.830 [2024-12-06 03:19:11.740243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:51.830 03:19:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:55.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:08.245 03:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.245 rmmod nvme_tcp 00:11:08.245 rmmod nvme_fabrics 00:11:08.245 rmmod nvme_keyring 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2536575 ']' 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2536575 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2536575 ']' 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2536575 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2536575 
00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2536575' 00:11:08.245 killing process with pid 2536575 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2536575 00:11:08.245 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2536575 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.502 03:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.502 03:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.400 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.400 00:11:10.400 real 0m24.765s 00:11:10.400 user 1m8.655s 00:11:10.400 sys 0m5.355s 00:11:10.400 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.400 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:10.400 ************************************ 00:11:10.400 END TEST nvmf_connect_disconnect 00:11:10.400 ************************************ 00:11:10.400 03:19:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:10.400 03:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.400 03:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.400 03:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.659 ************************************ 00:11:10.659 START TEST nvmf_multitarget 00:11:10.659 ************************************ 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:10.659 * Looking for test storage... 
00:11:10.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:10.659 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.659 --rc genhtml_branch_coverage=1 00:11:10.659 --rc genhtml_function_coverage=1 00:11:10.659 --rc genhtml_legend=1 00:11:10.659 --rc geninfo_all_blocks=1 00:11:10.659 --rc geninfo_unexecuted_blocks=1 00:11:10.659 00:11:10.659 ' 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:10.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.659 --rc genhtml_branch_coverage=1 00:11:10.659 --rc genhtml_function_coverage=1 00:11:10.659 --rc genhtml_legend=1 00:11:10.659 --rc geninfo_all_blocks=1 00:11:10.659 --rc geninfo_unexecuted_blocks=1 00:11:10.659 00:11:10.659 ' 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:10.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.659 --rc genhtml_branch_coverage=1 00:11:10.659 --rc genhtml_function_coverage=1 00:11:10.659 --rc genhtml_legend=1 00:11:10.659 --rc geninfo_all_blocks=1 00:11:10.659 --rc geninfo_unexecuted_blocks=1 00:11:10.659 00:11:10.659 ' 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:10.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.659 --rc genhtml_branch_coverage=1 00:11:10.659 --rc genhtml_function_coverage=1 00:11:10.659 --rc genhtml_legend=1 00:11:10.659 --rc geninfo_all_blocks=1 00:11:10.659 --rc geninfo_unexecuted_blocks=1 00:11:10.659 00:11:10.659 ' 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.659 03:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.659 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.660 03:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:10.660 03:19:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:15.923 03:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.923 03:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:15.923 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:15.923 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.923 03:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:15.923 Found net devices under 0000:86:00.0: cvl_0_0 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.923 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.923 
03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:15.924 Found net devices under 0000:86:00.1: cvl_0_1 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.924 03:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.924 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:16.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:11:16.183 00:11:16.183 --- 10.0.0.2 ping statistics --- 00:11:16.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.183 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:11:16.183 00:11:16.183 --- 10.0.0.1 ping statistics --- 00:11:16.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.183 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2542955 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2542955 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2542955 ']' 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.183 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:16.443 [2024-12-06 03:19:36.349779] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:11:16.443 [2024-12-06 03:19:36.349825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.443 [2024-12-06 03:19:36.415259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.443 [2024-12-06 03:19:36.458041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.443 [2024-12-06 03:19:36.458079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:16.443 [2024-12-06 03:19:36.458086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.443 [2024-12-06 03:19:36.458092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.443 [2024-12-06 03:19:36.458098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.443 [2024-12-06 03:19:36.459607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.443 [2024-12-06 03:19:36.459705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.443 [2024-12-06 03:19:36.459788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.443 [2024-12-06 03:19:36.459790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.443 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.443 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:16.443 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.443 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.443 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:16.701 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.701 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:16.701 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:16.701 03:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:16.701 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:16.701 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:16.701 "nvmf_tgt_1" 00:11:16.701 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:16.960 "nvmf_tgt_2" 00:11:16.960 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:16.960 03:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:16.960 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:16.960 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:17.218 true 00:11:17.218 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:17.218 true 00:11:17.218 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:17.218 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:17.218 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:17.218 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:17.218 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:17.218 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.218 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.476 rmmod nvme_tcp 00:11:17.476 rmmod nvme_fabrics 00:11:17.476 rmmod nvme_keyring 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2542955 ']' 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2542955 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2542955 ']' 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2542955 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542955 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542955' 00:11:17.476 killing process with pid 2542955 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2542955 00:11:17.476 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2542955 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.735 03:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.639 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.639 00:11:19.639 real 0m9.151s 00:11:19.639 user 0m7.090s 00:11:19.639 sys 0m4.576s 00:11:19.639 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.639 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:19.639 ************************************ 00:11:19.639 END TEST nvmf_multitarget 00:11:19.639 ************************************ 00:11:19.639 03:19:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:19.639 03:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.639 03:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.639 03:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.898 ************************************ 00:11:19.898 START TEST nvmf_rpc 00:11:19.898 ************************************ 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:19.898 * Looking for test storage... 
00:11:19.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.898 03:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.898 --rc genhtml_branch_coverage=1 00:11:19.898 --rc genhtml_function_coverage=1 00:11:19.898 --rc genhtml_legend=1 00:11:19.898 --rc geninfo_all_blocks=1 00:11:19.898 --rc geninfo_unexecuted_blocks=1 
00:11:19.898 00:11:19.898 ' 00:11:19.898 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.898 --rc genhtml_branch_coverage=1 00:11:19.899 --rc genhtml_function_coverage=1 00:11:19.899 --rc genhtml_legend=1 00:11:19.899 --rc geninfo_all_blocks=1 00:11:19.899 --rc geninfo_unexecuted_blocks=1 00:11:19.899 00:11:19.899 ' 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.899 --rc genhtml_branch_coverage=1 00:11:19.899 --rc genhtml_function_coverage=1 00:11:19.899 --rc genhtml_legend=1 00:11:19.899 --rc geninfo_all_blocks=1 00:11:19.899 --rc geninfo_unexecuted_blocks=1 00:11:19.899 00:11:19.899 ' 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.899 --rc genhtml_branch_coverage=1 00:11:19.899 --rc genhtml_function_coverage=1 00:11:19.899 --rc genhtml_legend=1 00:11:19.899 --rc geninfo_all_blocks=1 00:11:19.899 --rc geninfo_unexecuted_blocks=1 00:11:19.899 00:11:19.899 ' 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.899 03:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.899 03:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.899 03:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.467 
03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:26.467 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:26.467 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:26.467 Found net devices under 0000:86:00.0: cvl_0_0 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.467 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:26.468 Found net devices under 0000:86:00.1: cvl_0_1 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.468 03:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.468 
03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:11:26.468 00:11:26.468 --- 10.0.0.2 ping statistics --- 00:11:26.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.468 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:11:26.468 00:11:26.468 --- 10.0.0.1 ping statistics --- 00:11:26.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.468 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2546734 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.468 
03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2546734 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2546734 ']' 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.468 03:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.468 [2024-12-06 03:19:45.854684] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:11:26.468 [2024-12-06 03:19:45.854733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.468 [2024-12-06 03:19:45.920330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.468 [2024-12-06 03:19:45.960595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.468 [2024-12-06 03:19:45.960632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.468 [2024-12-06 03:19:45.960640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.468 [2024-12-06 03:19:45.960646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:26.468 [2024-12-06 03:19:45.960651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.468 [2024-12-06 03:19:45.962227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.468 [2024-12-06 03:19:45.962328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.468 [2024-12-06 03:19:45.962396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.468 [2024-12-06 03:19:45.962398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:26.468 "tick_rate": 2300000000, 00:11:26.468 "poll_groups": [ 00:11:26.468 { 00:11:26.468 "name": "nvmf_tgt_poll_group_000", 00:11:26.468 "admin_qpairs": 0, 00:11:26.468 "io_qpairs": 0, 00:11:26.468 
"current_admin_qpairs": 0, 00:11:26.468 "current_io_qpairs": 0, 00:11:26.468 "pending_bdev_io": 0, 00:11:26.468 "completed_nvme_io": 0, 00:11:26.468 "transports": [] 00:11:26.468 }, 00:11:26.468 { 00:11:26.468 "name": "nvmf_tgt_poll_group_001", 00:11:26.468 "admin_qpairs": 0, 00:11:26.468 "io_qpairs": 0, 00:11:26.468 "current_admin_qpairs": 0, 00:11:26.468 "current_io_qpairs": 0, 00:11:26.468 "pending_bdev_io": 0, 00:11:26.468 "completed_nvme_io": 0, 00:11:26.468 "transports": [] 00:11:26.468 }, 00:11:26.468 { 00:11:26.468 "name": "nvmf_tgt_poll_group_002", 00:11:26.468 "admin_qpairs": 0, 00:11:26.468 "io_qpairs": 0, 00:11:26.468 "current_admin_qpairs": 0, 00:11:26.468 "current_io_qpairs": 0, 00:11:26.468 "pending_bdev_io": 0, 00:11:26.468 "completed_nvme_io": 0, 00:11:26.468 "transports": [] 00:11:26.468 }, 00:11:26.468 { 00:11:26.468 "name": "nvmf_tgt_poll_group_003", 00:11:26.468 "admin_qpairs": 0, 00:11:26.468 "io_qpairs": 0, 00:11:26.468 "current_admin_qpairs": 0, 00:11:26.468 "current_io_qpairs": 0, 00:11:26.468 "pending_bdev_io": 0, 00:11:26.468 "completed_nvme_io": 0, 00:11:26.468 "transports": [] 00:11:26.468 } 00:11:26.468 ] 00:11:26.468 }' 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:26.468 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 [2024-12-06 03:19:46.217385] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:26.469 "tick_rate": 2300000000, 00:11:26.469 "poll_groups": [ 00:11:26.469 { 00:11:26.469 "name": "nvmf_tgt_poll_group_000", 00:11:26.469 "admin_qpairs": 0, 00:11:26.469 "io_qpairs": 0, 00:11:26.469 "current_admin_qpairs": 0, 00:11:26.469 "current_io_qpairs": 0, 00:11:26.469 "pending_bdev_io": 0, 00:11:26.469 "completed_nvme_io": 0, 00:11:26.469 "transports": [ 00:11:26.469 { 00:11:26.469 "trtype": "TCP" 00:11:26.469 } 00:11:26.469 ] 00:11:26.469 }, 00:11:26.469 { 00:11:26.469 "name": "nvmf_tgt_poll_group_001", 00:11:26.469 "admin_qpairs": 0, 00:11:26.469 "io_qpairs": 0, 00:11:26.469 "current_admin_qpairs": 0, 00:11:26.469 "current_io_qpairs": 0, 00:11:26.469 "pending_bdev_io": 0, 00:11:26.469 "completed_nvme_io": 0, 00:11:26.469 "transports": [ 00:11:26.469 { 00:11:26.469 "trtype": "TCP" 00:11:26.469 } 00:11:26.469 ] 00:11:26.469 }, 00:11:26.469 { 00:11:26.469 "name": "nvmf_tgt_poll_group_002", 00:11:26.469 "admin_qpairs": 0, 00:11:26.469 "io_qpairs": 0, 00:11:26.469 
"current_admin_qpairs": 0, 00:11:26.469 "current_io_qpairs": 0, 00:11:26.469 "pending_bdev_io": 0, 00:11:26.469 "completed_nvme_io": 0, 00:11:26.469 "transports": [ 00:11:26.469 { 00:11:26.469 "trtype": "TCP" 00:11:26.469 } 00:11:26.469 ] 00:11:26.469 }, 00:11:26.469 { 00:11:26.469 "name": "nvmf_tgt_poll_group_003", 00:11:26.469 "admin_qpairs": 0, 00:11:26.469 "io_qpairs": 0, 00:11:26.469 "current_admin_qpairs": 0, 00:11:26.469 "current_io_qpairs": 0, 00:11:26.469 "pending_bdev_io": 0, 00:11:26.469 "completed_nvme_io": 0, 00:11:26.469 "transports": [ 00:11:26.469 { 00:11:26.469 "trtype": "TCP" 00:11:26.469 } 00:11:26.469 ] 00:11:26.469 } 00:11:26.469 ] 00:11:26.469 }' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 Malloc1 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 [2024-12-06 03:19:46.398021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.469 
03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:26.469 [2024-12-06 03:19:46.426684] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:26.469 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:26.469 could not add new controller: failed to write to nvme-fabrics device 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.469 03:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.469 03:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.841 03:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.841 03:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.841 03:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.841 03:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.841 03:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.737 03:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:29.737 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.738 [2024-12-06 03:19:49.709572] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:29.738 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:29.738 could not add new controller: failed to write to nvme-fabrics device 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.738 03:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.738 03:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.110 03:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.110 03:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:31.110 03:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.110 03:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:31.110 03:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:33.012 03:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:33.012 03:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:33.012 03:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.012 03:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:33.012 03:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:11:33.012 03:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:33.012 03:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.012 03:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.012 03:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.012 [2024-12-06 03:19:53.061395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.012 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.013 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.013 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:33.013 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.013 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.013 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.013 03:19:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.387 03:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.387 03:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.387 03:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.387 03:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:34.387 03:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.287 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.288 03:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.288 [2024-12-06 03:19:56.408457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.288 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.545 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.545 03:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.480 03:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.480 03:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.480 03:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.480 03:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:37.480 03:19:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.008 [2024-12-06 03:19:59.807677] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.008 03:19:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.942 03:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.942 03:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:40.942 03:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:40.942 03:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:40.942 03:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.469 [2024-12-06 03:20:03.155995] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.469 03:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.402 03:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.402 03:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.402 03:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.402 03:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.402 03:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:46.300 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.300 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.300 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.300 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.300 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.300 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.300 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.558 [2024-12-06 03:20:06.506443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.558 03:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.558 03:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.489 03:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.489 03:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.489 03:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.489 03:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.489 03:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.017 [2024-12-06 03:20:09.772833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.017 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 [2024-12-06 03:20:09.820958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.018 
03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 [2024-12-06 03:20:09.869084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.018 
03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 [2024-12-06 03:20:09.917248] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 [2024-12-06 
03:20:09.965415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.018 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.019 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.019 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.019 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.019 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.019 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.019 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.019 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.019 03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.019 
03:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:50.019 "tick_rate": 2300000000, 00:11:50.019 "poll_groups": [ 00:11:50.019 { 00:11:50.019 "name": "nvmf_tgt_poll_group_000", 00:11:50.019 "admin_qpairs": 2, 00:11:50.019 "io_qpairs": 168, 00:11:50.019 "current_admin_qpairs": 0, 00:11:50.019 "current_io_qpairs": 0, 00:11:50.019 "pending_bdev_io": 0, 00:11:50.019 "completed_nvme_io": 234, 00:11:50.019 "transports": [ 00:11:50.019 { 00:11:50.019 "trtype": "TCP" 00:11:50.019 } 00:11:50.019 ] 00:11:50.019 }, 00:11:50.019 { 00:11:50.019 "name": "nvmf_tgt_poll_group_001", 00:11:50.019 "admin_qpairs": 2, 00:11:50.019 "io_qpairs": 168, 00:11:50.019 "current_admin_qpairs": 0, 00:11:50.019 "current_io_qpairs": 0, 00:11:50.019 "pending_bdev_io": 0, 00:11:50.019 "completed_nvme_io": 218, 00:11:50.019 "transports": [ 00:11:50.019 { 00:11:50.019 "trtype": "TCP" 00:11:50.019 } 00:11:50.019 ] 00:11:50.019 }, 00:11:50.019 { 00:11:50.019 "name": "nvmf_tgt_poll_group_002", 00:11:50.019 "admin_qpairs": 1, 00:11:50.019 "io_qpairs": 168, 00:11:50.019 "current_admin_qpairs": 0, 00:11:50.019 "current_io_qpairs": 0, 00:11:50.019 "pending_bdev_io": 0, 00:11:50.019 "completed_nvme_io": 266, 00:11:50.019 "transports": [ 00:11:50.019 { 00:11:50.019 "trtype": "TCP" 00:11:50.019 } 00:11:50.019 ] 00:11:50.019 }, 00:11:50.019 { 00:11:50.019 "name": "nvmf_tgt_poll_group_003", 00:11:50.019 "admin_qpairs": 2, 00:11:50.019 "io_qpairs": 168, 
00:11:50.019 "current_admin_qpairs": 0, 00:11:50.019 "current_io_qpairs": 0, 00:11:50.019 "pending_bdev_io": 0, 00:11:50.019 "completed_nvme_io": 304, 00:11:50.019 "transports": [ 00:11:50.019 { 00:11:50.019 "trtype": "TCP" 00:11:50.019 } 00:11:50.019 ] 00:11:50.019 } 00:11:50.019 ] 00:11:50.019 }' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.019 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.019 rmmod nvme_tcp 00:11:50.019 rmmod nvme_fabrics 00:11:50.277 rmmod nvme_keyring 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2546734 ']' 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2546734 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2546734 ']' 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2546734 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2546734 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.277 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.278 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2546734' 00:11:50.278 killing process with pid 2546734 00:11:50.278 03:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2546734 00:11:50.278 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2546734 00:11:50.536 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.536 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.536 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.536 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:50.536 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:50.536 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.537 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.537 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.537 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.537 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.537 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.537 03:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.439 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:52.439 00:11:52.439 real 0m32.721s 00:11:52.439 user 1m39.093s 00:11:52.439 sys 0m6.408s 00:11:52.439 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.439 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.439 ************************************ 00:11:52.439 END TEST 
nvmf_rpc 00:11:52.439 ************************************ 00:11:52.439 03:20:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:52.439 03:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.439 03:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.439 03:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.439 ************************************ 00:11:52.439 START TEST nvmf_invalid 00:11:52.439 ************************************ 00:11:52.439 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:52.699 * Looking for test storage... 00:11:52.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:52.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.699 --rc genhtml_branch_coverage=1 00:11:52.699 --rc genhtml_function_coverage=1 00:11:52.699 --rc genhtml_legend=1 00:11:52.699 --rc geninfo_all_blocks=1 00:11:52.699 --rc geninfo_unexecuted_blocks=1 00:11:52.699 00:11:52.699 ' 
00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:52.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.699 --rc genhtml_branch_coverage=1 00:11:52.699 --rc genhtml_function_coverage=1 00:11:52.699 --rc genhtml_legend=1 00:11:52.699 --rc geninfo_all_blocks=1 00:11:52.699 --rc geninfo_unexecuted_blocks=1 00:11:52.699 00:11:52.699 ' 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:52.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.699 --rc genhtml_branch_coverage=1 00:11:52.699 --rc genhtml_function_coverage=1 00:11:52.699 --rc genhtml_legend=1 00:11:52.699 --rc geninfo_all_blocks=1 00:11:52.699 --rc geninfo_unexecuted_blocks=1 00:11:52.699 00:11:52.699 ' 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:52.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.699 --rc genhtml_branch_coverage=1 00:11:52.699 --rc genhtml_function_coverage=1 00:11:52.699 --rc genhtml_legend=1 00:11:52.699 --rc geninfo_all_blocks=1 00:11:52.699 --rc geninfo_unexecuted_blocks=1 00:11:52.699 00:11:52.699 ' 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.699 03:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.699 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.699 
03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.700 03:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.700 03:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.700 03:20:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.261 03:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.261 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.262 03:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:59.262 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:59.262 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:59.262 Found net devices under 0000:86:00.0: cvl_0_0 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:59.262 Found net devices under 0000:86:00.1: cvl_0_1 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.262 03:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.262 03:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:11:59.262 00:11:59.262 --- 10.0.0.2 ping statistics --- 00:11:59.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.262 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:11:59.262 00:11:59.262 --- 10.0.0.1 ping statistics --- 00:11:59.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.262 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.262 03:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2554338 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2554338 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2554338 ']' 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.262 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.263 [2024-12-06 03:20:18.655756] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:11:59.263 [2024-12-06 03:20:18.655807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.263 [2024-12-06 03:20:18.726999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.263 [2024-12-06 03:20:18.771585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.263 [2024-12-06 03:20:18.771623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.263 [2024-12-06 03:20:18.771631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.263 [2024-12-06 03:20:18.771638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.263 [2024-12-06 03:20:18.771643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:59.263 [2024-12-06 03:20:18.773161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.263 [2024-12-06 03:20:18.773260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.263 [2024-12-06 03:20:18.773335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.263 [2024-12-06 03:20:18.773336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:59.263 03:20:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7879 00:11:59.263 [2024-12-06 03:20:19.091724] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:59.263 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:59.263 { 00:11:59.263 "nqn": "nqn.2016-06.io.spdk:cnode7879", 00:11:59.263 "tgt_name": "foobar", 00:11:59.263 "method": "nvmf_create_subsystem", 00:11:59.263 "req_id": 1 00:11:59.263 } 00:11:59.263 Got JSON-RPC error 
response 00:11:59.263 response: 00:11:59.263 { 00:11:59.263 "code": -32603, 00:11:59.263 "message": "Unable to find target foobar" 00:11:59.263 }' 00:11:59.263 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:59.263 { 00:11:59.263 "nqn": "nqn.2016-06.io.spdk:cnode7879", 00:11:59.263 "tgt_name": "foobar", 00:11:59.263 "method": "nvmf_create_subsystem", 00:11:59.263 "req_id": 1 00:11:59.263 } 00:11:59.263 Got JSON-RPC error response 00:11:59.263 response: 00:11:59.263 { 00:11:59.263 "code": -32603, 00:11:59.263 "message": "Unable to find target foobar" 00:11:59.263 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:59.263 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:59.263 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5667 00:11:59.263 [2024-12-06 03:20:19.308484] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5667: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:59.263 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:59.263 { 00:11:59.263 "nqn": "nqn.2016-06.io.spdk:cnode5667", 00:11:59.263 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:59.263 "method": "nvmf_create_subsystem", 00:11:59.263 "req_id": 1 00:11:59.263 } 00:11:59.263 Got JSON-RPC error response 00:11:59.263 response: 00:11:59.263 { 00:11:59.263 "code": -32602, 00:11:59.263 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:59.263 }' 00:11:59.263 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:59.263 { 00:11:59.263 "nqn": "nqn.2016-06.io.spdk:cnode5667", 00:11:59.263 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:59.263 "method": "nvmf_create_subsystem", 00:11:59.263 
"req_id": 1 00:11:59.263 } 00:11:59.263 Got JSON-RPC error response 00:11:59.263 response: 00:11:59.263 { 00:11:59.263 "code": -32602, 00:11:59.263 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:59.263 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:59.263 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:59.263 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27263 00:11:59.523 [2024-12-06 03:20:19.517188] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27263: invalid model number 'SPDK_Controller' 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:59.523 { 00:11:59.523 "nqn": "nqn.2016-06.io.spdk:cnode27263", 00:11:59.523 "model_number": "SPDK_Controller\u001f", 00:11:59.523 "method": "nvmf_create_subsystem", 00:11:59.523 "req_id": 1 00:11:59.523 } 00:11:59.523 Got JSON-RPC error response 00:11:59.523 response: 00:11:59.523 { 00:11:59.523 "code": -32602, 00:11:59.523 "message": "Invalid MN SPDK_Controller\u001f" 00:11:59.523 }' 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:59.523 { 00:11:59.523 "nqn": "nqn.2016-06.io.spdk:cnode27263", 00:11:59.523 "model_number": "SPDK_Controller\u001f", 00:11:59.523 "method": "nvmf_create_subsystem", 00:11:59.523 "req_id": 1 00:11:59.523 } 00:11:59.523 Got JSON-RPC error response 00:11:59.523 response: 00:11:59.523 { 00:11:59.523 "code": -32602, 00:11:59.523 "message": "Invalid MN SPDK_Controller\u001f" 00:11:59.523 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:59.523 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:59.524 03:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:59.524 03:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:59.524 03:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:59.524 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.821 03:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.821 03:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:11:59.821 03:20:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'v< /dev/null' 00:12:02.502 03:20:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.409 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:04.409 00:12:04.409 real 0m11.921s 00:12:04.409 user 0m18.804s 00:12:04.409 sys 0m5.310s 00:12:04.409 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.409 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:04.409 ************************************ 00:12:04.409 END TEST nvmf_invalid 00:12:04.409 ************************************ 00:12:04.409 03:20:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:04.409 03:20:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.409 03:20:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.409 03:20:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.667 ************************************ 00:12:04.667 START TEST nvmf_connect_stress 00:12:04.667 ************************************ 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:04.667 * Looking for test storage... 
00:12:04.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.667 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:04.668 03:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.668 03:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:04.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.668 --rc genhtml_branch_coverage=1 00:12:04.668 --rc genhtml_function_coverage=1 00:12:04.668 --rc genhtml_legend=1 00:12:04.668 --rc geninfo_all_blocks=1 00:12:04.668 --rc geninfo_unexecuted_blocks=1 00:12:04.668 00:12:04.668 ' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:04.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.668 --rc genhtml_branch_coverage=1 00:12:04.668 --rc genhtml_function_coverage=1 00:12:04.668 --rc genhtml_legend=1 00:12:04.668 --rc geninfo_all_blocks=1 00:12:04.668 --rc geninfo_unexecuted_blocks=1 00:12:04.668 00:12:04.668 ' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:04.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.668 --rc genhtml_branch_coverage=1 00:12:04.668 --rc genhtml_function_coverage=1 00:12:04.668 --rc genhtml_legend=1 00:12:04.668 --rc geninfo_all_blocks=1 00:12:04.668 --rc geninfo_unexecuted_blocks=1 00:12:04.668 00:12:04.668 ' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:04.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.668 --rc genhtml_branch_coverage=1 00:12:04.668 --rc genhtml_function_coverage=1 00:12:04.668 --rc genhtml_legend=1 00:12:04.668 --rc geninfo_all_blocks=1 00:12:04.668 --rc geninfo_unexecuted_blocks=1 00:12:04.668 00:12:04.668 ' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:04.668 03:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.934 03:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:09.934 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.934 03:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:09.934 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.934 03:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:09.934 Found net devices under 0000:86:00.0: cvl_0_0 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:09.934 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.934 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:09.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:12:09.935 00:12:09.935 --- 10.0.0.2 ping statistics --- 00:12:09.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.935 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:12:09.935 00:12:09.935 --- 10.0.0.1 ping statistics --- 00:12:09.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.935 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:09.935 03:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2558517 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2558517 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2558517 ']' 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.935 03:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.935 [2024-12-06 03:20:29.854853] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:12:09.935 [2024-12-06 03:20:29.854897] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.935 [2024-12-06 03:20:29.921069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.935 [2024-12-06 03:20:29.962717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.935 [2024-12-06 03:20:29.962757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.935 [2024-12-06 03:20:29.962765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.935 [2024-12-06 03:20:29.962774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.935 [2024-12-06 03:20:29.962779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:09.935 [2024-12-06 03:20:29.964137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.935 [2024-12-06 03:20:29.964227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.935 [2024-12-06 03:20:29.964228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.194 [2024-12-06 03:20:30.115068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.194 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
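[editor's note] The `waitforlisten 2558517` call above blocks until the freshly started `nvmf_tgt` is listening on its RPC socket (`/var/tmp/spdk.sock`, up to `max_retries=100` attempts per the trace). A sketch of that pattern, where the temp path and the background touch are hypothetical stand-ins for the real socket and the target process:

```shell
# Poll until the app's socket appears, bounded by max_retries.
demo_sock=$(mktemp -u)             # path that does not exist yet
( sleep 1; : > "$demo_sock" ) &    # stand-in: app creates its socket later
max_retries=100
i=0
while [ ! -e "$demo_sock" ] && [ "$i" -lt "$max_retries" ]; do
  sleep 0.1
  i=$((i + 1))
done
wait
if [ -e "$demo_sock" ]; then status=listening; else status=timeout; fi
echo "$status after $i polls"
rm -f "$demo_sock"
```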
-- # xtrace_disable 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.195 [2024-12-06 03:20:30.135301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.195 NULL1 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2558543 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
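[editor's note] For reference, the target-side RPC sequence scattered through the trace above, collected in one place. These are the exact commands the test issues via `rpc_cmd`; they require a running SPDK `nvmf_tgt` and the test harness's `rpc_cmd` helper, so this is a non-runnable summary rather than a standalone script:

```shell
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512
```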
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.195 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.454 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.454 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:10.454 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.454 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
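[editor's note] The repeated `for i in $(seq 1 20)` / `cat` pairs above build up `rpc.txt`, appending one RPC batch per iteration. A sketch of that accumulation, where the temp path and placeholder line are hypothetical stand-ins for the real `.../target/rpc.txt` and batch contents:

```shell
# Build a 20-batch RPC file the way connect_stress.sh does: rm, then append
# one batch per seq iteration.
demo_rpcs=$(mktemp)
rm -f "$demo_rpcs"
for i in $(seq 1 20); do
  echo "batch $i" >> "$demo_rpcs"   # the real script cats an RPC batch here
done
lines=$(wc -l < "$demo_rpcs" | tr -d ' ')
echo "wrote $lines batches"
rm -f "$demo_rpcs"
```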
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.454 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.021 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.021 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:11.021 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.021 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.021 03:20:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.281 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.281 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:11.281 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.281 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.281 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.539 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.539 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:11.539 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.540 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.540 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.798 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.798 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:11.798 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.798 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.798 03:20:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.057 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.057 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:12.057 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.057 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.057 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.624 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.624 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:12.624 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.624 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.624 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.883 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.883 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:12.883 03:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.883 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.883 03:20:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.142 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.142 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:13.142 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.142 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.142 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.401 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.401 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:13.401 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.401 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.401 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.969 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.969 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:13.969 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.969 03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.969 
03:20:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.228 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.228 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:14.228 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.228 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.228 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.486 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.486 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:14.486 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.486 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.486 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.744 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.744 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:14.744 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.744 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.744 03:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.002 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.002 
03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:15.002 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.002 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.002 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.567 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.567 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:15.567 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.567 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.567 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.825 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.825 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:15.825 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.825 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.825 03:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.083 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.083 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:16.083 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:16.083 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.083 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.341 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.341 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:16.341 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.341 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.341 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.908 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.908 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:16.908 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.908 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.908 03:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.166 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:17.166 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.166 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.166 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:17.425 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.425 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:17.425 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.425 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.425 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.684 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.684 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:17.684 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.684 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.684 03:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.943 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.943 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:17.943 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.943 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.943 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.511 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.511 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2558543 00:12:18.512 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.512 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.512 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.771 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.771 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:18.771 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.771 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.771 03:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.030 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.030 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:19.030 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.030 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.030 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.288 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.288 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:19.288 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.288 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:19.288 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.547 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.547 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:19.547 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.547 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.547 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.114 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.114 03:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:20.114 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.114 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.114 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.372 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.372 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:20.372 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.372 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.372 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.372 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2558543 00:12:20.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2558543) - No such process 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2558543 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.631 rmmod nvme_tcp 00:12:20.631 rmmod nvme_fabrics 00:12:20.631 rmmod nvme_keyring 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2558517 ']' 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2558517 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2558517 ']' 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2558517 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2558517 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2558517' 00:12:20.631 killing process with pid 2558517 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2558517 00:12:20.631 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2558517 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.891 03:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.426 03:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.426 00:12:23.426 real 0m18.442s 00:12:23.426 user 0m40.068s 00:12:23.426 sys 0m7.961s 00:12:23.426 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.426 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.426 ************************************ 00:12:23.426 END TEST nvmf_connect_stress 00:12:23.426 ************************************ 00:12:23.426 03:20:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:23.426 03:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.426 03:20:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.426 03:20:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.426 ************************************ 00:12:23.426 START TEST nvmf_fused_ordering 00:12:23.426 ************************************ 00:12:23.426 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:23.426 * Looking for test storage... 00:12:23.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.426 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.427 03:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:23.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.427 --rc genhtml_branch_coverage=1 00:12:23.427 --rc genhtml_function_coverage=1 00:12:23.427 --rc genhtml_legend=1 00:12:23.427 --rc geninfo_all_blocks=1 00:12:23.427 --rc geninfo_unexecuted_blocks=1 00:12:23.427 00:12:23.427 ' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:23.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.427 --rc genhtml_branch_coverage=1 00:12:23.427 --rc genhtml_function_coverage=1 00:12:23.427 --rc genhtml_legend=1 00:12:23.427 --rc geninfo_all_blocks=1 00:12:23.427 --rc geninfo_unexecuted_blocks=1 00:12:23.427 00:12:23.427 ' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:23.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.427 --rc genhtml_branch_coverage=1 00:12:23.427 --rc genhtml_function_coverage=1 00:12:23.427 --rc genhtml_legend=1 00:12:23.427 --rc geninfo_all_blocks=1 00:12:23.427 --rc geninfo_unexecuted_blocks=1 00:12:23.427 00:12:23.427 ' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:23.427 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:23.427 --rc genhtml_branch_coverage=1 00:12:23.427 --rc genhtml_function_coverage=1 00:12:23.427 --rc genhtml_legend=1 00:12:23.427 --rc geninfo_all_blocks=1 00:12:23.427 --rc geninfo_unexecuted_blocks=1 00:12:23.427 00:12:23.427 ' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.427 03:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.427 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.428 03:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.700 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.701 03:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:28.701 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.701 03:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:28.701 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.701 03:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:28.701 Found net devices under 0000:86:00.0: cvl_0_0 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:28.701 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:28.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:28.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:12:28.701 00:12:28.701 --- 10.0.0.2 ping statistics --- 00:12:28.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.701 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:12:28.701 00:12:28.701 --- 10.0.0.1 ping statistics --- 00:12:28.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.701 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:28.701 03:20:48 
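The network bring-up that `nvmf_tcp_init` performs in the trace above (flush both ports of the NIC pair, move the target-side port into a namespace, assign 10.0.0.1/10.0.0.2, open TCP port 4420, then ping in both directions) can be sketched as a dry-run command generator. The interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) are taken from this run; the `run` wrapper is a hypothetical stand-in that only echoes, so the sketch stays runnable without root privileges.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init bring-up seen in the log above.
# "run" is a hypothetical wrapper that echoes instead of executing;
# swap its body for "$@" (and run as root) to apply the commands.
set -euo pipefail

TGT_IF=cvl_0_0          # target-side port (moves into the namespace)
INI_IF=cvl_0_1          # initiator-side port (stays in the root namespace)
NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

run() { echo "+ $*"; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                       # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INI_IP"   # target -> initiator
```

Moving one port into a namespace is what lets a single host act as both target and initiator over real NIC hardware: traffic between 10.0.0.1 and 10.0.0.2 leaves the root namespace and comes back through the wire, rather than short-circuiting over loopback.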
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2563885 00:12:28.701 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:28.702 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2563885 00:12:28.702 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2563885 ']' 00:12:28.702 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.702 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.702 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.702 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.702 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.702 [2024-12-06 03:20:48.712552] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:12:28.702 [2024-12-06 03:20:48.712597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.702 [2024-12-06 03:20:48.778542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.702 [2024-12-06 03:20:48.819700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.702 [2024-12-06 03:20:48.819738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.702 [2024-12-06 03:20:48.819746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.702 [2024-12-06 03:20:48.819752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.702 [2024-12-06 03:20:48.819757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:28.702 [2024-12-06 03:20:48.820304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 [2024-12-06 03:20:48.957248] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 [2024-12-06 03:20:48.973412] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 NULL1 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.961 03:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 03:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.961 03:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:28.961 [2024-12-06 03:20:49.032502] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:12:28.961 [2024-12-06 03:20:49.032533] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563930 ] 00:12:29.530 Attached to nqn.2016-06.io.spdk:cnode1 00:12:29.530 Namespace ID: 1 size: 1GB 00:12:29.530 fused_ordering(0) 00:12:29.530 fused_ordering(1) 00:12:29.530 fused_ordering(2) 00:12:29.530 fused_ordering(3) 00:12:29.530 fused_ordering(4) 00:12:29.530 fused_ordering(5) 00:12:29.530 fused_ordering(6) 00:12:29.530 fused_ordering(7) 00:12:29.530 fused_ordering(8) 00:12:29.530 fused_ordering(9) 00:12:29.530 fused_ordering(10) 00:12:29.530 fused_ordering(11) 00:12:29.530 fused_ordering(12) 00:12:29.530 fused_ordering(13) 00:12:29.530 fused_ordering(14) 00:12:29.530 fused_ordering(15) 00:12:29.530 fused_ordering(16) 00:12:29.530 fused_ordering(17) 00:12:29.530 fused_ordering(18) 00:12:29.530 fused_ordering(19) 00:12:29.530 fused_ordering(20) 00:12:29.530 fused_ordering(21) 00:12:29.530 fused_ordering(22) 00:12:29.530 fused_ordering(23) 00:12:29.530 fused_ordering(24) 00:12:29.530 fused_ordering(25) 00:12:29.530 fused_ordering(26) 00:12:29.530 fused_ordering(27) 00:12:29.530 
fused_ordering(28) 00:12:29.530 fused_ordering(29) 00:12:29.530 fused_ordering(30) 00:12:29.530 fused_ordering(31) 00:12:29.530 fused_ordering(32) 00:12:29.530 fused_ordering(33) 00:12:29.530 fused_ordering(34) 00:12:29.530 fused_ordering(35) 00:12:29.530 fused_ordering(36) 00:12:29.530 fused_ordering(37) 00:12:29.530 fused_ordering(38) 00:12:29.530 fused_ordering(39) 00:12:29.530 fused_ordering(40) 00:12:29.530 fused_ordering(41) 00:12:29.530 fused_ordering(42) 00:12:29.530 fused_ordering(43) 00:12:29.530 fused_ordering(44) 00:12:29.530 fused_ordering(45) 00:12:29.530 fused_ordering(46) 00:12:29.530 fused_ordering(47) 00:12:29.530 fused_ordering(48) 00:12:29.530 fused_ordering(49) 00:12:29.530 fused_ordering(50) 00:12:29.530 fused_ordering(51) 00:12:29.530 fused_ordering(52) 00:12:29.530 fused_ordering(53) 00:12:29.530 fused_ordering(54) 00:12:29.530 fused_ordering(55) 00:12:29.530 fused_ordering(56) 00:12:29.530 fused_ordering(57) 00:12:29.530 fused_ordering(58) 00:12:29.530 fused_ordering(59) 00:12:29.530 fused_ordering(60) 00:12:29.530 fused_ordering(61) 00:12:29.530 fused_ordering(62) 00:12:29.530 fused_ordering(63) 00:12:29.530 fused_ordering(64) 00:12:29.530 fused_ordering(65) 00:12:29.530 fused_ordering(66) 00:12:29.530 fused_ordering(67) 00:12:29.530 fused_ordering(68) 00:12:29.530 fused_ordering(69) 00:12:29.530 fused_ordering(70) 00:12:29.530 fused_ordering(71) 00:12:29.530 fused_ordering(72) 00:12:29.530 fused_ordering(73) 00:12:29.530 fused_ordering(74) 00:12:29.530 fused_ordering(75) 00:12:29.530 fused_ordering(76) 00:12:29.530 fused_ordering(77) 00:12:29.531 fused_ordering(78) 00:12:29.531 fused_ordering(79) 00:12:29.531 fused_ordering(80) 00:12:29.531 fused_ordering(81) 00:12:29.531 fused_ordering(82) 00:12:29.531 fused_ordering(83) 00:12:29.531 fused_ordering(84) 00:12:29.531 fused_ordering(85) 00:12:29.531 fused_ordering(86) 00:12:29.531 fused_ordering(87) 00:12:29.531 fused_ordering(88) 00:12:29.531 fused_ordering(89) 00:12:29.531 
fused_ordering(90) 00:12:29.531 fused_ordering(91) 00:12:29.531 fused_ordering(92) 00:12:29.531 fused_ordering(93) 00:12:29.531 fused_ordering(94) 00:12:29.531 fused_ordering(95) 00:12:29.531 fused_ordering(96) 00:12:29.531 fused_ordering(97) 00:12:29.531 fused_ordering(98) 00:12:29.531 fused_ordering(99) 00:12:29.531 fused_ordering(100) 00:12:29.531 fused_ordering(101) 00:12:29.531 fused_ordering(102) 00:12:29.531 fused_ordering(103) 00:12:29.531 fused_ordering(104) 00:12:29.531 fused_ordering(105) 00:12:29.531 fused_ordering(106) 00:12:29.531 fused_ordering(107) 00:12:29.531 fused_ordering(108) 00:12:29.531 fused_ordering(109) 00:12:29.531 fused_ordering(110) 00:12:29.531 fused_ordering(111) 00:12:29.531 fused_ordering(112) 00:12:29.531 fused_ordering(113) 00:12:29.531 fused_ordering(114) 00:12:29.531 fused_ordering(115) 00:12:29.531 fused_ordering(116) 00:12:29.531 fused_ordering(117) 00:12:29.531 fused_ordering(118) 00:12:29.531 fused_ordering(119) 00:12:29.531 fused_ordering(120) 00:12:29.531 fused_ordering(121) 00:12:29.531 fused_ordering(122) 00:12:29.531 fused_ordering(123) 00:12:29.531 fused_ordering(124) 00:12:29.531 fused_ordering(125) 00:12:29.531 fused_ordering(126) 00:12:29.531 fused_ordering(127) 00:12:29.531 fused_ordering(128) 00:12:29.531 fused_ordering(129) 00:12:29.531 fused_ordering(130) 00:12:29.531 fused_ordering(131) 00:12:29.531 fused_ordering(132) 00:12:29.531 fused_ordering(133) 00:12:29.531 fused_ordering(134) 00:12:29.531 fused_ordering(135) 00:12:29.531 fused_ordering(136) 00:12:29.531 fused_ordering(137) 00:12:29.531 fused_ordering(138) 00:12:29.531 fused_ordering(139) 00:12:29.531 fused_ordering(140) 00:12:29.531 fused_ordering(141) 00:12:29.531 fused_ordering(142) 00:12:29.531 fused_ordering(143) 00:12:29.531 fused_ordering(144) 00:12:29.531 fused_ordering(145) 00:12:29.531 fused_ordering(146) 00:12:29.531 fused_ordering(147) 00:12:29.531 fused_ordering(148) 00:12:29.531 fused_ordering(149) 00:12:29.531 fused_ordering(150) 
00:12:29.531 fused_ordering(151) 00:12:29.531 fused_ordering(152) 00:12:29.531 fused_ordering(153) 00:12:29.531 fused_ordering(154) 00:12:29.531 fused_ordering(155) 00:12:29.531 fused_ordering(156) 00:12:29.531 fused_ordering(157) 00:12:29.531 fused_ordering(158) 00:12:29.531 fused_ordering(159) 00:12:29.531 fused_ordering(160) 00:12:29.531 fused_ordering(161) 00:12:29.531 fused_ordering(162) 00:12:29.531 fused_ordering(163) 00:12:29.531 fused_ordering(164) 00:12:29.531 fused_ordering(165) 00:12:29.531 fused_ordering(166) 00:12:29.531 fused_ordering(167) 00:12:29.531 fused_ordering(168) 00:12:29.531 fused_ordering(169) 00:12:29.531 fused_ordering(170) 00:12:29.531 fused_ordering(171) 00:12:29.531 fused_ordering(172) 00:12:29.531 fused_ordering(173) 00:12:29.531 fused_ordering(174) 00:12:29.531 fused_ordering(175) 00:12:29.531 fused_ordering(176) 00:12:29.531 fused_ordering(177) 00:12:29.531 fused_ordering(178) 00:12:29.531 fused_ordering(179) 00:12:29.531 fused_ordering(180) 00:12:29.531 fused_ordering(181) 00:12:29.531 fused_ordering(182) 00:12:29.531 fused_ordering(183) 00:12:29.531 fused_ordering(184) 00:12:29.531 fused_ordering(185) 00:12:29.531 fused_ordering(186) 00:12:29.531 fused_ordering(187) 00:12:29.531 fused_ordering(188) 00:12:29.531 fused_ordering(189) 00:12:29.531 fused_ordering(190) 00:12:29.531 fused_ordering(191) 00:12:29.531 fused_ordering(192) 00:12:29.531 fused_ordering(193) 00:12:29.531 fused_ordering(194) 00:12:29.531 fused_ordering(195) 00:12:29.531 fused_ordering(196) 00:12:29.531 fused_ordering(197) 00:12:29.531 fused_ordering(198) 00:12:29.531 fused_ordering(199) 00:12:29.531 fused_ordering(200) 00:12:29.531 fused_ordering(201) 00:12:29.531 fused_ordering(202) 00:12:29.531 fused_ordering(203) 00:12:29.531 fused_ordering(204) 00:12:29.531 fused_ordering(205) 00:12:29.531 fused_ordering(206) 00:12:29.531 fused_ordering(207) 00:12:29.531 fused_ordering(208) 00:12:29.531 fused_ordering(209) 00:12:29.531 fused_ordering(210) 00:12:29.531 
fused_ordering(211) 00:12:29.531 fused_ordering(212) 00:12:29.531 fused_ordering(213) 00:12:29.531 fused_ordering(214) 00:12:29.531 fused_ordering(215) 00:12:29.531 fused_ordering(216) 00:12:29.531 fused_ordering(217) 00:12:29.531 fused_ordering(218) 00:12:29.531 fused_ordering(219) 00:12:29.531 fused_ordering(220) 00:12:29.531 fused_ordering(221) 00:12:29.531 fused_ordering(222) 00:12:29.531 fused_ordering(223) 00:12:29.531 fused_ordering(224) 00:12:29.531 fused_ordering(225) 00:12:29.531 fused_ordering(226) 00:12:29.531 fused_ordering(227) 00:12:29.531 fused_ordering(228) 00:12:29.531 fused_ordering(229) 00:12:29.531 fused_ordering(230) 00:12:29.531 fused_ordering(231) 00:12:29.531 fused_ordering(232) 00:12:29.531 fused_ordering(233) 00:12:29.531 fused_ordering(234) 00:12:29.531 fused_ordering(235) 00:12:29.531 fused_ordering(236) 00:12:29.531 fused_ordering(237) 00:12:29.531 fused_ordering(238) 00:12:29.531 fused_ordering(239) 00:12:29.531 fused_ordering(240) 00:12:29.531 fused_ordering(241) 00:12:29.531 fused_ordering(242) 00:12:29.531 fused_ordering(243) 00:12:29.531 fused_ordering(244) 00:12:29.531 fused_ordering(245) 00:12:29.531 fused_ordering(246) 00:12:29.531 fused_ordering(247) 00:12:29.531 fused_ordering(248) 00:12:29.531 fused_ordering(249) 00:12:29.531 fused_ordering(250) 00:12:29.531 fused_ordering(251) 00:12:29.531 fused_ordering(252) 00:12:29.531 fused_ordering(253) 00:12:29.531 fused_ordering(254) 00:12:29.531 fused_ordering(255) 00:12:29.531 fused_ordering(256) 00:12:29.531 fused_ordering(257) 00:12:29.531 fused_ordering(258) 00:12:29.531 fused_ordering(259) 00:12:29.531 fused_ordering(260) 00:12:29.531 fused_ordering(261) 00:12:29.531 fused_ordering(262) 00:12:29.531 fused_ordering(263) 00:12:29.531 fused_ordering(264) 00:12:29.531 fused_ordering(265) 00:12:29.531 fused_ordering(266) 00:12:29.531 fused_ordering(267) 00:12:29.531 fused_ordering(268) 00:12:29.531 fused_ordering(269) 00:12:29.531 fused_ordering(270) 00:12:29.531 fused_ordering(271) 
00:12:29.531 fused_ordering(272) 00:12:29.531 fused_ordering(273) 00:12:29.531 fused_ordering(274) 00:12:29.531 fused_ordering(275) 00:12:29.531 fused_ordering(276) 00:12:29.531 fused_ordering(277) 00:12:29.531 fused_ordering(278) 00:12:29.531 fused_ordering(279) 00:12:29.531 fused_ordering(280) 00:12:29.531 fused_ordering(281) 00:12:29.531 fused_ordering(282) 00:12:29.531 fused_ordering(283) 00:12:29.531 fused_ordering(284) 00:12:29.531 fused_ordering(285) 00:12:29.531 fused_ordering(286) 00:12:29.531 fused_ordering(287) 00:12:29.531 fused_ordering(288) 00:12:29.531 fused_ordering(289) 00:12:29.531 fused_ordering(290) 00:12:29.531 fused_ordering(291) 00:12:29.531 fused_ordering(292) 00:12:29.531 fused_ordering(293) 00:12:29.531 fused_ordering(294) 00:12:29.531 fused_ordering(295) 00:12:29.531 fused_ordering(296) 00:12:29.531 fused_ordering(297) 00:12:29.531 fused_ordering(298) 00:12:29.531 fused_ordering(299) 00:12:29.531 fused_ordering(300) 00:12:29.531 fused_ordering(301) 00:12:29.531 fused_ordering(302) 00:12:29.531 fused_ordering(303) 00:12:29.531 fused_ordering(304) 00:12:29.531 fused_ordering(305) 00:12:29.531 fused_ordering(306) 00:12:29.531 fused_ordering(307) 00:12:29.531 fused_ordering(308) 00:12:29.531 fused_ordering(309) 00:12:29.531 fused_ordering(310) 00:12:29.531 fused_ordering(311) 00:12:29.531 fused_ordering(312) 00:12:29.531 fused_ordering(313) 00:12:29.531 fused_ordering(314) 00:12:29.531 fused_ordering(315) 00:12:29.531 fused_ordering(316) 00:12:29.531 fused_ordering(317) 00:12:29.531 fused_ordering(318) 00:12:29.531 fused_ordering(319) 00:12:29.531 fused_ordering(320) 00:12:29.531 fused_ordering(321) 00:12:29.531 fused_ordering(322) 00:12:29.531 fused_ordering(323) 00:12:29.531 fused_ordering(324) 00:12:29.531 fused_ordering(325) 00:12:29.531 fused_ordering(326) 00:12:29.531 fused_ordering(327) 00:12:29.531 fused_ordering(328) 00:12:29.531 fused_ordering(329) 00:12:29.531 fused_ordering(330) 00:12:29.531 fused_ordering(331) 00:12:29.531 
fused_ordering(332) 00:12:29.531 fused_ordering(333) 00:12:29.531 fused_ordering(334) 00:12:29.531 fused_ordering(335) 00:12:29.531 fused_ordering(336) 00:12:29.531 fused_ordering(337) 00:12:29.531 fused_ordering(338) 00:12:29.531 fused_ordering(339) 00:12:29.531 fused_ordering(340) 00:12:29.531 fused_ordering(341) 00:12:29.531 fused_ordering(342) 00:12:29.531 fused_ordering(343) 00:12:29.531 fused_ordering(344) 00:12:29.531 fused_ordering(345) 00:12:29.531 fused_ordering(346) 00:12:29.531 fused_ordering(347) 00:12:29.531 fused_ordering(348) 00:12:29.531 fused_ordering(349) 00:12:29.531 fused_ordering(350) 00:12:29.531 fused_ordering(351) 00:12:29.531 fused_ordering(352) 00:12:29.532 fused_ordering(353) 00:12:29.532 fused_ordering(354) 00:12:29.532 fused_ordering(355) 00:12:29.532 fused_ordering(356) 00:12:29.532 fused_ordering(357) 00:12:29.532 fused_ordering(358) 00:12:29.532 fused_ordering(359) 00:12:29.532 fused_ordering(360) 00:12:29.532 fused_ordering(361) 00:12:29.532 fused_ordering(362) 00:12:29.532 fused_ordering(363) 00:12:29.532 fused_ordering(364) 00:12:29.532 fused_ordering(365) 00:12:29.532 fused_ordering(366) 00:12:29.532 fused_ordering(367) 00:12:29.532 fused_ordering(368) 00:12:29.532 fused_ordering(369) 00:12:29.532 fused_ordering(370) 00:12:29.532 fused_ordering(371) 00:12:29.532 fused_ordering(372) 00:12:29.532 fused_ordering(373) 00:12:29.532 fused_ordering(374) 00:12:29.532 fused_ordering(375) 00:12:29.532 fused_ordering(376) 00:12:29.532 fused_ordering(377) 00:12:29.532 fused_ordering(378) 00:12:29.532 fused_ordering(379) 00:12:29.532 fused_ordering(380) 00:12:29.532 fused_ordering(381) 00:12:29.532 fused_ordering(382) 00:12:29.532 fused_ordering(383) 00:12:29.532 fused_ordering(384) 00:12:29.532 fused_ordering(385) 00:12:29.532 fused_ordering(386) 00:12:29.532 fused_ordering(387) 00:12:29.532 fused_ordering(388) 00:12:29.532 fused_ordering(389) 00:12:29.532 fused_ordering(390) 00:12:29.532 fused_ordering(391) 00:12:29.532 fused_ordering(392) 
00:12:29.532 fused_ordering(393) 00:12:29.532 fused_ordering(394) 00:12:29.532 fused_ordering(395) 00:12:29.532 fused_ordering(396) 00:12:29.532 fused_ordering(397) 00:12:29.532 fused_ordering(398) 00:12:29.532 fused_ordering(399) 00:12:29.532 fused_ordering(400) 00:12:29.532 fused_ordering(401) 00:12:29.532 fused_ordering(402) 00:12:29.532 fused_ordering(403) 00:12:29.532 fused_ordering(404) 00:12:29.532 fused_ordering(405) 00:12:29.532 fused_ordering(406) 00:12:29.532 fused_ordering(407) 00:12:29.532 fused_ordering(408) 00:12:29.532 fused_ordering(409) 00:12:29.532 fused_ordering(410) 00:12:30.100 fused_ordering(411) 00:12:30.100 fused_ordering(412) 00:12:30.100 fused_ordering(413) 00:12:30.101 fused_ordering(414) 00:12:30.101 fused_ordering(415) 00:12:30.101 fused_ordering(416) 00:12:30.101 fused_ordering(417) 00:12:30.101 fused_ordering(418) 00:12:30.101 fused_ordering(419) 00:12:30.101 fused_ordering(420) 00:12:30.101 fused_ordering(421) 00:12:30.101 fused_ordering(422) 00:12:30.101 fused_ordering(423) 00:12:30.101 fused_ordering(424) 00:12:30.101 fused_ordering(425) 00:12:30.101 fused_ordering(426) 00:12:30.101 fused_ordering(427) 00:12:30.101 fused_ordering(428) 00:12:30.101 fused_ordering(429) 00:12:30.101 fused_ordering(430) 00:12:30.101 fused_ordering(431) 00:12:30.101 fused_ordering(432) 00:12:30.101 fused_ordering(433) 00:12:30.101 fused_ordering(434) 00:12:30.101 fused_ordering(435) 00:12:30.101 fused_ordering(436) 00:12:30.101 fused_ordering(437) 00:12:30.101 fused_ordering(438) 00:12:30.101 fused_ordering(439) 00:12:30.101 fused_ordering(440) 00:12:30.101 fused_ordering(441) 00:12:30.101 fused_ordering(442) 00:12:30.101 fused_ordering(443) 00:12:30.101 fused_ordering(444) 00:12:30.101 fused_ordering(445) 00:12:30.101 fused_ordering(446) 00:12:30.101 fused_ordering(447) 00:12:30.101 fused_ordering(448) 00:12:30.101 fused_ordering(449) 00:12:30.101 fused_ordering(450) 00:12:30.101 fused_ordering(451) 00:12:30.101 fused_ordering(452) 00:12:30.101 
fused_ordering(453) 00:12:30.101 fused_ordering(454) 00:12:30.101 fused_ordering(455) 00:12:30.101 fused_ordering(456) 00:12:30.101 fused_ordering(457) 00:12:30.101 fused_ordering(458) 00:12:30.101 fused_ordering(459) 00:12:30.101 fused_ordering(460) 00:12:30.101 fused_ordering(461) 00:12:30.101 fused_ordering(462) 00:12:30.101 fused_ordering(463) 00:12:30.101 fused_ordering(464) 00:12:30.101 fused_ordering(465) 00:12:30.101 fused_ordering(466) 00:12:30.101 fused_ordering(467) 00:12:30.101 fused_ordering(468) 00:12:30.101 fused_ordering(469) 00:12:30.101 fused_ordering(470) 00:12:30.101 fused_ordering(471) 00:12:30.101 fused_ordering(472) 00:12:30.101 fused_ordering(473) 00:12:30.101 fused_ordering(474) 00:12:30.101 fused_ordering(475) 00:12:30.101 fused_ordering(476) 00:12:30.101 fused_ordering(477) 00:12:30.101 fused_ordering(478) 00:12:30.101 fused_ordering(479) 00:12:30.101 fused_ordering(480) 00:12:30.101 fused_ordering(481) 00:12:30.101 fused_ordering(482) 00:12:30.101 fused_ordering(483) 00:12:30.101 fused_ordering(484) 00:12:30.101 fused_ordering(485) 00:12:30.101 fused_ordering(486) 00:12:30.101 fused_ordering(487) 00:12:30.101 fused_ordering(488) 00:12:30.101 fused_ordering(489) 00:12:30.101 fused_ordering(490) 00:12:30.101 fused_ordering(491) 00:12:30.101 fused_ordering(492) 00:12:30.101 fused_ordering(493) 00:12:30.101 fused_ordering(494) 00:12:30.101 fused_ordering(495) 00:12:30.101 fused_ordering(496) 00:12:30.101 fused_ordering(497) 00:12:30.101 fused_ordering(498) 00:12:30.101 fused_ordering(499) 00:12:30.101 fused_ordering(500) 00:12:30.101 fused_ordering(501) 00:12:30.101 fused_ordering(502) 00:12:30.101 fused_ordering(503) 00:12:30.101 fused_ordering(504) 00:12:30.101 fused_ordering(505) 00:12:30.101 fused_ordering(506) 00:12:30.101 fused_ordering(507) 00:12:30.101 fused_ordering(508) 00:12:30.101 fused_ordering(509) 00:12:30.101 fused_ordering(510) 00:12:30.101 fused_ordering(511) 00:12:30.101 fused_ordering(512) 00:12:30.101 fused_ordering(513) 
fused_ordering(937) 00:12:30.930 fused_ordering(938) 00:12:30.930 fused_ordering(939) 00:12:30.930 fused_ordering(940) 00:12:30.930 fused_ordering(941) 00:12:30.930 fused_ordering(942) 00:12:30.930 fused_ordering(943) 00:12:30.930 fused_ordering(944) 00:12:30.930 fused_ordering(945) 00:12:30.930 fused_ordering(946) 00:12:30.930 fused_ordering(947) 00:12:30.930 fused_ordering(948) 00:12:30.930 fused_ordering(949) 00:12:30.930 fused_ordering(950) 00:12:30.930 fused_ordering(951) 00:12:30.930 fused_ordering(952) 00:12:30.930 fused_ordering(953) 00:12:30.930 fused_ordering(954) 00:12:30.930 fused_ordering(955) 00:12:30.930 fused_ordering(956) 00:12:30.930 fused_ordering(957) 00:12:30.930 fused_ordering(958) 00:12:30.930 fused_ordering(959) 00:12:30.930 fused_ordering(960) 00:12:30.930 fused_ordering(961) 00:12:30.930 fused_ordering(962) 00:12:30.930 fused_ordering(963) 00:12:30.930 fused_ordering(964) 00:12:30.930 fused_ordering(965) 00:12:30.930 fused_ordering(966) 00:12:30.930 fused_ordering(967) 00:12:30.930 fused_ordering(968) 00:12:30.930 fused_ordering(969) 00:12:30.930 fused_ordering(970) 00:12:30.930 fused_ordering(971) 00:12:30.930 fused_ordering(972) 00:12:30.930 fused_ordering(973) 00:12:30.930 fused_ordering(974) 00:12:30.930 fused_ordering(975) 00:12:30.930 fused_ordering(976) 00:12:30.930 fused_ordering(977) 00:12:30.930 fused_ordering(978) 00:12:30.930 fused_ordering(979) 00:12:30.930 fused_ordering(980) 00:12:30.930 fused_ordering(981) 00:12:30.930 fused_ordering(982) 00:12:30.930 fused_ordering(983) 00:12:30.930 fused_ordering(984) 00:12:30.930 fused_ordering(985) 00:12:30.930 fused_ordering(986) 00:12:30.930 fused_ordering(987) 00:12:30.930 fused_ordering(988) 00:12:30.930 fused_ordering(989) 00:12:30.930 fused_ordering(990) 00:12:30.930 fused_ordering(991) 00:12:30.930 fused_ordering(992) 00:12:30.930 fused_ordering(993) 00:12:30.930 fused_ordering(994) 00:12:30.930 fused_ordering(995) 00:12:30.930 fused_ordering(996) 00:12:30.930 fused_ordering(997) 
00:12:30.930 fused_ordering(998) 00:12:30.930 fused_ordering(999) 00:12:30.930 fused_ordering(1000) 00:12:30.930 fused_ordering(1001) 00:12:30.930 fused_ordering(1002) 00:12:30.930 fused_ordering(1003) 00:12:30.930 fused_ordering(1004) 00:12:30.930 fused_ordering(1005) 00:12:30.930 fused_ordering(1006) 00:12:30.930 fused_ordering(1007) 00:12:30.930 fused_ordering(1008) 00:12:30.930 fused_ordering(1009) 00:12:30.930 fused_ordering(1010) 00:12:30.930 fused_ordering(1011) 00:12:30.930 fused_ordering(1012) 00:12:30.930 fused_ordering(1013) 00:12:30.930 fused_ordering(1014) 00:12:30.930 fused_ordering(1015) 00:12:30.930 fused_ordering(1016) 00:12:30.930 fused_ordering(1017) 00:12:30.930 fused_ordering(1018) 00:12:30.930 fused_ordering(1019) 00:12:30.930 fused_ordering(1020) 00:12:30.930 fused_ordering(1021) 00:12:30.930 fused_ordering(1022) 00:12:30.930 fused_ordering(1023) 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.930 rmmod nvme_tcp 00:12:30.930 rmmod nvme_fabrics 00:12:30.930 rmmod nvme_keyring 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2563885 ']' 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2563885 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2563885 ']' 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2563885 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2563885 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2563885' 00:12:30.930 killing process with pid 2563885 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2563885 00:12:30.930 03:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2563885 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.190 03:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.093 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.093 00:12:33.093 real 0m10.147s 00:12:33.093 user 0m4.878s 00:12:33.093 sys 0m5.474s 00:12:33.093 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.093 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:33.093 ************************************ 00:12:33.093 END TEST nvmf_fused_ordering 00:12:33.093 ************************************ 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:33.353 03:20:53 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.353 ************************************ 00:12:33.353 START TEST nvmf_ns_masking 00:12:33.353 ************************************ 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:33.353 * Looking for test storage... 00:12:33.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.353 03:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:33.353 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:33.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.354 --rc genhtml_branch_coverage=1 00:12:33.354 --rc genhtml_function_coverage=1 00:12:33.354 --rc genhtml_legend=1 00:12:33.354 --rc geninfo_all_blocks=1 00:12:33.354 --rc geninfo_unexecuted_blocks=1 00:12:33.354 00:12:33.354 ' 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:33.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.354 --rc genhtml_branch_coverage=1 00:12:33.354 --rc genhtml_function_coverage=1 00:12:33.354 --rc genhtml_legend=1 00:12:33.354 --rc geninfo_all_blocks=1 00:12:33.354 --rc geninfo_unexecuted_blocks=1 00:12:33.354 00:12:33.354 ' 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:33.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.354 --rc genhtml_branch_coverage=1 00:12:33.354 --rc genhtml_function_coverage=1 00:12:33.354 --rc genhtml_legend=1 00:12:33.354 --rc geninfo_all_blocks=1 00:12:33.354 --rc geninfo_unexecuted_blocks=1 00:12:33.354 00:12:33.354 ' 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:33.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.354 --rc genhtml_branch_coverage=1 00:12:33.354 --rc 
genhtml_function_coverage=1 00:12:33.354 --rc genhtml_legend=1 00:12:33.354 --rc geninfo_all_blocks=1 00:12:33.354 --rc geninfo_unexecuted_blocks=1 00:12:33.354 00:12:33.354 ' 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.354 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.355 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.355 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.355 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:33.355 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=43598d8e-f671-4087-8ff1-896596f379b7 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7b41a49f-70b2-489f-b6d8-a28a302d016e 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ebdaeed7-a968-41e1-ac6c-ca8d87ad1221 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:33.613 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:33.614 03:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.177 03:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.177 03:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:40.177 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:40.177 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:12:40.177 Found net devices under 0000:86:00.0: cvl_0_0 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.177 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:40.178 Found net devices under 0000:86:00.1: cvl_0_1 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:12:40.178 00:12:40.178 --- 10.0.0.2 ping statistics --- 00:12:40.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.178 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:40.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:12:40.178 00:12:40.178 --- 10.0.0.1 ping statistics --- 00:12:40.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.178 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2567701 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2567701 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2567701 ']' 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:40.178 [2024-12-06 03:20:59.506245] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:12:40.178 [2024-12-06 03:20:59.506292] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.178 [2024-12-06 03:20:59.573957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.178 [2024-12-06 03:20:59.613237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.178 [2024-12-06 03:20:59.613275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:40.178 [2024-12-06 03:20:59.613283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.178 [2024-12-06 03:20:59.613289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.178 [2024-12-06 03:20:59.613293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.178 [2024-12-06 03:20:59.613882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:40.178 [2024-12-06 03:20:59.919363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:40.178 03:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:12:40.178 Malloc1 00:12:40.178 03:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:40.437 Malloc2 00:12:40.437 03:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:40.437 03:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:40.694 03:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.951 [2024-12-06 03:21:00.917163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.951 03:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:40.951 03:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ebdaeed7-a968-41e1-ac6c-ca8d87ad1221 -a 10.0.0.2 -s 4420 -i 4 00:12:40.951 03:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.951 03:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:40.951 03:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.951 03:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:40.951 03:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.483 [ 0]:0x1 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.483 
03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e619f77b8b149daba7b297eee4118a8 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e619f77b8b149daba7b297eee4118a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.483 [ 0]:0x1 00:12:43.483 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e619f77b8b149daba7b297eee4118a8 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e619f77b8b149daba7b297eee4118a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.484 [ 1]:0x2 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d76c500c46e245438d9c4edf43242cea 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d76c500c46e245438d9c4edf43242cea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.484 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.742 03:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:44.000 03:21:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:44.000 03:21:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ebdaeed7-a968-41e1-ac6c-ca8d87ad1221 -a 10.0.0.2 -s 4420 -i 4 00:12:44.259 03:21:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:44.259 03:21:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:44.259 03:21:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.259 03:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:44.259 03:21:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:44.259 03:21:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:46.160 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:46.419 [ 0]:0x2 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d76c500c46e245438d9c4edf43242cea 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d76c500c46e245438d9c4edf43242cea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.419 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.676 [ 0]:0x1 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e619f77b8b149daba7b297eee4118a8 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e619f77b8b149daba7b297eee4118a8 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:46.676 [ 1]:0x2 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:46.676 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.677 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d76c500c46e245438d9c4edf43242cea 00:12:46.677 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d76c500c46e245438d9c4edf43242cea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.677 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.935 [ 0]:0x2 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:46.935 03:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.935 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d76c500c46e245438d9c4edf43242cea 00:12:46.935 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d76c500c46e245438d9c4edf43242cea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.935 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:46.935 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.935 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.193 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:47.193 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ebdaeed7-a968-41e1-ac6c-ca8d87ad1221 -a 10.0.0.2 -s 4420 -i 4 00:12:47.450 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:47.450 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:47.450 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.450 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:47.450 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:47.450 03:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:49.372 [ 0]:0x1 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:49.372 03:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e619f77b8b149daba7b297eee4118a8 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e619f77b8b149daba7b297eee4118a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.372 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:49.373 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:49.373 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:49.373 [ 1]:0x2 00:12:49.373 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:49.373 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d76c500c46e245438d9c4edf43242cea 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d76c500c46e245438d9c4edf43242cea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:49.630 
03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:49.630 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:49.888 [ 0]:0x2 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d76c500c46e245438d9c4edf43242cea 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d76c500c46e245438d9c4edf43242cea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:49.888 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:49.889 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:49.889 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.889 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.889 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.889 03:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.889 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.889 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.889 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.889 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:49.889 03:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:49.889 [2024-12-06 03:21:10.011653] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:49.889 request: 00:12:49.889 { 00:12:49.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.889 "nsid": 2, 00:12:49.889 "host": "nqn.2016-06.io.spdk:host1", 00:12:49.889 "method": "nvmf_ns_remove_host", 00:12:49.889 "req_id": 1 00:12:49.889 } 00:12:49.889 Got JSON-RPC error response 00:12:49.889 response: 00:12:49.889 { 00:12:49.889 "code": -32602, 00:12:49.889 "message": "Invalid parameters" 00:12:49.889 } 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:50.146 03:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:50.146 [ 0]:0x2 00:12:50.146 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:50.147 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:50.147 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d76c500c46e245438d9c4edf43242cea 00:12:50.147 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d76c500c46e245438d9c4edf43242cea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.147 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:50.147 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2570205 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2570205 /var/tmp/host.sock 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2570205 ']' 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:50.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.405 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:50.405 [2024-12-06 03:21:10.374765] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:12:50.405 [2024-12-06 03:21:10.374813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570205 ] 00:12:50.405 [2024-12-06 03:21:10.434677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.405 [2024-12-06 03:21:10.477737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.662 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.662 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:50.662 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.919 03:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:51.176 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 43598d8e-f671-4087-8ff1-896596f379b7 00:12:51.176 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:51.176 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 43598D8EF67140878FF1896596F379B7 -i 00:12:51.176 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7b41a49f-70b2-489f-b6d8-a28a302d016e 00:12:51.176 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:51.176 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7B41A49F70B2489FB6D8A28A302D016E -i 00:12:51.433 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:51.690 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:51.690 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:51.691 03:21:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:51.948 nvme0n1 00:12:51.948 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:51.948 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:52.204 nvme1n2 00:12:52.204 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:52.204 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:52.204 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:52.205 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:52.205 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:52.462 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:52.462 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:52.462 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:52.462 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:52.719 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 43598d8e-f671-4087-8ff1-896596f379b7 == \4\3\5\9\8\d\8\e\-\f\6\7\1\-\4\0\8\7\-\8\f\f\1\-\8\9\6\5\9\6\f\3\7\9\b\7 ]] 00:12:52.719 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:52.719 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:52.719 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:52.991 03:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7b41a49f-70b2-489f-b6d8-a28a302d016e == \7\b\4\1\a\4\9\f\-\7\0\b\2\-\4\8\9\f\-\b\6\d\8\-\a\2\8\a\3\0\2\d\0\1\6\e ]] 00:12:52.991 03:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.991 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 43598d8e-f671-4087-8ff1-896596f379b7 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 43598D8EF67140878FF1896596F379B7 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 43598D8EF67140878FF1896596F379B7 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:53.254 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 43598D8EF67140878FF1896596F379B7 00:12:53.510 [2024-12-06 03:21:13.493128] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:53.510 [2024-12-06 03:21:13.493174] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:53.510 [2024-12-06 03:21:13.493184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.510 request: 00:12:53.510 { 00:12:53.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.510 "namespace": { 00:12:53.510 "bdev_name": "invalid", 00:12:53.510 "nsid": 1, 00:12:53.510 "nguid": "43598D8EF67140878FF1896596F379B7", 00:12:53.510 "no_auto_visible": false, 00:12:53.510 "hide_metadata": false 00:12:53.510 }, 00:12:53.510 "method": "nvmf_subsystem_add_ns", 00:12:53.510 "req_id": 1 00:12:53.510 } 00:12:53.510 Got JSON-RPC error response 00:12:53.510 response: 00:12:53.510 { 00:12:53.511 "code": -32602, 00:12:53.511 "message": "Invalid parameters" 00:12:53.511 } 00:12:53.511 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:53.511 03:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:53.511 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:53.511 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:53.511 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 43598d8e-f671-4087-8ff1-896596f379b7 00:12:53.511 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:53.511 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 43598D8EF67140878FF1896596F379B7 -i 00:12:53.768 03:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:55.665 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:55.665 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:55.665 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2570205 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2570205 ']' 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2570205 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:55.923 03:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2570205 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2570205' 00:12:55.923 killing process with pid 2570205 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2570205 00:12:55.923 03:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2570205 00:12:56.180 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.437 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:56.437 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:56.437 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:12:56.438 rmmod nvme_tcp 00:12:56.438 rmmod nvme_fabrics 00:12:56.438 rmmod nvme_keyring 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2567701 ']' 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2567701 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2567701 ']' 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2567701 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2567701 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2567701' 00:12:56.438 killing process with pid 2567701 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2567701 00:12:56.438 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2567701 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.696 03:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.232 00:12:59.232 real 0m25.541s 00:12:59.232 user 0m30.222s 00:12:59.232 sys 0m7.045s 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:59.232 ************************************ 00:12:59.232 END TEST nvmf_ns_masking 00:12:59.232 ************************************ 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.232 ************************************ 00:12:59.232 START TEST nvmf_nvme_cli 00:12:59.232 ************************************ 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:59.232 * Looking for test storage... 00:12:59.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:12:59.232 03:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:59.232 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:59.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.233 --rc genhtml_branch_coverage=1 00:12:59.233 --rc genhtml_function_coverage=1 00:12:59.233 --rc genhtml_legend=1 00:12:59.233 --rc geninfo_all_blocks=1 00:12:59.233 --rc geninfo_unexecuted_blocks=1 00:12:59.233 
00:12:59.233 ' 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:59.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.233 --rc genhtml_branch_coverage=1 00:12:59.233 --rc genhtml_function_coverage=1 00:12:59.233 --rc genhtml_legend=1 00:12:59.233 --rc geninfo_all_blocks=1 00:12:59.233 --rc geninfo_unexecuted_blocks=1 00:12:59.233 00:12:59.233 ' 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:59.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.233 --rc genhtml_branch_coverage=1 00:12:59.233 --rc genhtml_function_coverage=1 00:12:59.233 --rc genhtml_legend=1 00:12:59.233 --rc geninfo_all_blocks=1 00:12:59.233 --rc geninfo_unexecuted_blocks=1 00:12:59.233 00:12:59.233 ' 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:59.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.233 --rc genhtml_branch_coverage=1 00:12:59.233 --rc genhtml_function_coverage=1 00:12:59.233 --rc genhtml_legend=1 00:12:59.233 --rc geninfo_all_blocks=1 00:12:59.233 --rc geninfo_unexecuted_blocks=1 00:12:59.233 00:12:59.233 ' 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.233 03:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:59.233 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.234 03:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:04.497 03:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:04.497 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:04.497 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.497 03:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:04.497 Found net devices under 0000:86:00.0: cvl_0_0 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:04.497 Found net devices under 0000:86:00.1: cvl_0_1 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.497 03:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.497 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:04.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:13:04.756 00:13:04.756 --- 10.0.0.2 ping statistics --- 00:13:04.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.756 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:04.756 00:13:04.756 --- 10.0.0.1 ping statistics --- 00:13:04.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.756 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:04.756 03:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:04.756 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2574851 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2574851 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2574851 ']' 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.757 03:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.015 [2024-12-06 03:21:24.930120] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:13:05.015 [2024-12-06 03:21:24.930171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.015 [2024-12-06 03:21:24.998336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.015 [2024-12-06 03:21:25.040104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.015 [2024-12-06 03:21:25.040144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.015 [2024-12-06 03:21:25.040152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.015 [2024-12-06 03:21:25.040158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.015 [2024-12-06 03:21:25.040164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:05.015 [2024-12-06 03:21:25.041792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.015 [2024-12-06 03:21:25.041890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.015 [2024-12-06 03:21:25.041975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.015 [2024-12-06 03:21:25.041992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.015 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.015 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:05.015 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:05.015 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:05.015 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.272 [2024-12-06 03:21:25.192839] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.272 Malloc0 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.272 Malloc1 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.272 [2024-12-06 03:21:25.288091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.272 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:05.530 00:13:05.530 Discovery Log Number of Records 2, Generation counter 2 00:13:05.530 =====Discovery Log Entry 0====== 00:13:05.530 trtype: tcp 00:13:05.530 adrfam: ipv4 00:13:05.530 subtype: current discovery subsystem 00:13:05.530 treq: not required 00:13:05.530 portid: 0 00:13:05.530 trsvcid: 4420 
00:13:05.530 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:05.530 traddr: 10.0.0.2 00:13:05.530 eflags: explicit discovery connections, duplicate discovery information 00:13:05.530 sectype: none 00:13:05.530 =====Discovery Log Entry 1====== 00:13:05.530 trtype: tcp 00:13:05.530 adrfam: ipv4 00:13:05.530 subtype: nvme subsystem 00:13:05.530 treq: not required 00:13:05.530 portid: 0 00:13:05.530 trsvcid: 4420 00:13:05.530 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:05.530 traddr: 10.0.0.2 00:13:05.530 eflags: none 00:13:05.530 sectype: none 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:05.530 03:21:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.901 03:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:06.901 03:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:06.901 03:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.901 03:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:06.901 03:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:06.901 03:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:08.813 
03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:08.813 /dev/nvme0n2 ]] 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.813 rmmod nvme_tcp 00:13:08.813 rmmod nvme_fabrics 00:13:08.813 rmmod nvme_keyring 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2574851 ']' 
00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2574851 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2574851 ']' 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2574851 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.813 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2574851 00:13:09.073 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.073 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.073 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2574851' 00:13:09.073 killing process with pid 2574851 00:13:09.073 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2574851 00:13:09.073 03:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2574851 00:13:09.073 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:09.073 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:09.073 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:09.073 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:09.073 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:09.073 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-restore 00:13:09.073 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:09.331 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.331 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.331 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.331 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.331 03:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.235 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.235 00:13:11.235 real 0m12.387s 00:13:11.235 user 0m18.253s 00:13:11.235 sys 0m4.949s 00:13:11.235 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.235 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:11.235 ************************************ 00:13:11.235 END TEST nvmf_nvme_cli 00:13:11.235 ************************************ 00:13:11.235 03:21:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:11.235 03:21:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:11.235 03:21:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.235 03:21:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.235 03:21:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.235 ************************************ 
00:13:11.235 START TEST nvmf_vfio_user 00:13:11.235 ************************************ 00:13:11.235 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:11.496 * Looking for test storage... 00:13:11.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.496 
03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:11.496 03:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.496 --rc genhtml_branch_coverage=1 00:13:11.496 --rc genhtml_function_coverage=1 00:13:11.496 --rc genhtml_legend=1 00:13:11.496 --rc geninfo_all_blocks=1 00:13:11.496 --rc geninfo_unexecuted_blocks=1 00:13:11.496 00:13:11.496 ' 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.496 --rc genhtml_branch_coverage=1 00:13:11.496 --rc genhtml_function_coverage=1 00:13:11.496 --rc genhtml_legend=1 00:13:11.496 --rc geninfo_all_blocks=1 00:13:11.496 --rc geninfo_unexecuted_blocks=1 00:13:11.496 00:13:11.496 ' 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.496 --rc genhtml_branch_coverage=1 00:13:11.496 --rc genhtml_function_coverage=1 00:13:11.496 --rc genhtml_legend=1 00:13:11.496 --rc geninfo_all_blocks=1 00:13:11.496 --rc geninfo_unexecuted_blocks=1 00:13:11.496 00:13:11.496 ' 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.496 --rc genhtml_branch_coverage=1 00:13:11.496 --rc genhtml_function_coverage=1 00:13:11.496 --rc genhtml_legend=1 00:13:11.496 --rc geninfo_all_blocks=1 00:13:11.496 --rc geninfo_unexecuted_blocks=1 00:13:11.496 00:13:11.496 ' 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.496 
03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.496 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:11.497 03:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2575985 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2575985' 00:13:11.497 Process pid: 2575985 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2575985 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2575985 ']' 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.497 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:11.497 [2024-12-06 03:21:31.613689] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:13:11.497 [2024-12-06 03:21:31.613736] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.756 [2024-12-06 03:21:31.675892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.756 [2024-12-06 03:21:31.719268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.756 [2024-12-06 03:21:31.719303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.756 [2024-12-06 03:21:31.719311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.756 [2024-12-06 03:21:31.719317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.756 [2024-12-06 03:21:31.719322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:11.756 [2024-12-06 03:21:31.720806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.756 [2024-12-06 03:21:31.720823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.756 [2024-12-06 03:21:31.720912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.756 [2024-12-06 03:21:31.720914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.756 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.756 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:11.756 03:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:12.690 03:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:12.949 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:12.949 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:12.949 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:12.949 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:12.949 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:13.208 Malloc1 00:13:13.208 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:13.466 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:13.724 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:13.982 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.982 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:13.982 03:21:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:13.982 Malloc2 00:13:13.982 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:14.240 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:14.497 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:14.759 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:14.759 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:14.759 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:14.759 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:14.759 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:14.759 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:14.759 [2024-12-06 03:21:34.701477] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:13:14.759 [2024-12-06 03:21:34.701509] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576595 ] 00:13:14.759 [2024-12-06 03:21:34.740887] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:14.759 [2024-12-06 03:21:34.749326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:14.759 [2024-12-06 03:21:34.749351] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0cd6c4f000 00:13:14.759 [2024-12-06 03:21:34.750322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.759 [2024-12-06 03:21:34.751324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.759 [2024-12-06 03:21:34.752327] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.759 [2024-12-06 03:21:34.753334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.759 [2024-12-06 03:21:34.754337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.759 [2024-12-06 03:21:34.755345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.759 [2024-12-06 03:21:34.756355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.759 [2024-12-06 03:21:34.757362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.759 [2024-12-06 03:21:34.761956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:14.759 [2024-12-06 03:21:34.761966] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0cd6c44000 00:13:14.759 [2024-12-06 03:21:34.762905] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:14.759 [2024-12-06 03:21:34.772963] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:14.759 [2024-12-06 03:21:34.772991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:14.759 [2024-12-06 03:21:34.778503] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:14.760 [2024-12-06 03:21:34.778542] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:14.760 [2024-12-06 03:21:34.778614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:14.760 [2024-12-06 03:21:34.778629] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:14.760 [2024-12-06 03:21:34.778635] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:14.760 [2024-12-06 03:21:34.779500] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:14.760 [2024-12-06 03:21:34.779510] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:14.760 [2024-12-06 03:21:34.779520] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:14.760 [2024-12-06 03:21:34.780508] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:14.760 [2024-12-06 03:21:34.780517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:14.760 [2024-12-06 03:21:34.780524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:14.760 [2024-12-06 03:21:34.781516] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:14.760 [2024-12-06 03:21:34.781525] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:14.760 [2024-12-06 03:21:34.782518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:14.760 [2024-12-06 03:21:34.782526] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:14.760 [2024-12-06 03:21:34.782531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:14.760 [2024-12-06 03:21:34.782537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:14.760 [2024-12-06 03:21:34.782644] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:14.760 [2024-12-06 03:21:34.782649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:14.760 [2024-12-06 03:21:34.782654] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:14.760 [2024-12-06 03:21:34.783524] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:14.760 [2024-12-06 03:21:34.784525] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:14.760 [2024-12-06 03:21:34.785532] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:14.760 [2024-12-06 03:21:34.786530] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:14.760 [2024-12-06 03:21:34.786595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:14.760 [2024-12-06 03:21:34.787542] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:14.760 [2024-12-06 03:21:34.787550] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:14.760 [2024-12-06 03:21:34.787555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:14.760 [2024-12-06 03:21:34.787573] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:14.760 [2024-12-06 03:21:34.787579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:14.760 [2024-12-06 03:21:34.787599] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.760 [2024-12-06 03:21:34.787604] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.760 [2024-12-06 03:21:34.787609] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.760 [2024-12-06 03:21:34.787623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.760 [2024-12-06 03:21:34.787674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:14.760 [2024-12-06 03:21:34.787684] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:14.760 [2024-12-06 03:21:34.787690] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:14.760 [2024-12-06 03:21:34.787694] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:14.760 [2024-12-06 03:21:34.787699] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:14.760 [2024-12-06 03:21:34.787703] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:14.760 [2024-12-06 03:21:34.787707] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:14.760 [2024-12-06 03:21:34.787711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:14.760 [2024-12-06 03:21:34.787718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:14.760 [2024-12-06 03:21:34.787728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:14.760 [2024-12-06 03:21:34.787741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:14.760 [2024-12-06 03:21:34.787751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.760 [2024-12-06 
03:21:34.787759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.760 [2024-12-06 03:21:34.787767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.760 [2024-12-06 03:21:34.787774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.760 [2024-12-06 03:21:34.787778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:14.760 [2024-12-06 03:21:34.787786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:14.760 [2024-12-06 03:21:34.787794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:14.760 [2024-12-06 03:21:34.787804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:14.760 [2024-12-06 03:21:34.787809] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:14.760 [2024-12-06 03:21:34.787814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:14.760 [2024-12-06 03:21:34.787820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:14.760 [2024-12-06 03:21:34.787826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:14.760 [2024-12-06 03:21:34.787835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:14.760 [2024-12-06 03:21:34.787843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:14.760 [2024-12-06 03:21:34.787895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.787902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.787909] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:14.761 [2024-12-06 03:21:34.787913] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:14.761 [2024-12-06 03:21:34.787917] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.761 [2024-12-06 03:21:34.787922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.787937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.787946] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:14.761 [2024-12-06 03:21:34.787965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.787972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.787978] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.761 [2024-12-06 03:21:34.787982] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.761 [2024-12-06 03:21:34.787985] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.761 [2024-12-06 03:21:34.787990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.788012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.788023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.788030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.788036] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.761 [2024-12-06 03:21:34.788040] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.761 [2024-12-06 03:21:34.788043] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.761 [2024-12-06 03:21:34.788048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.788060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.788067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.788073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.788081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.788088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.788093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.788098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.788102] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:14.761 [2024-12-06 03:21:34.788106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:14.761 [2024-12-06 03:21:34.788111] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:14.761 [2024-12-06 03:21:34.788127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.788136] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.788146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.788156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.788166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.788177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.788186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.788195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.788208] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:14.761 [2024-12-06 03:21:34.788212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:14.761 [2024-12-06 03:21:34.788215] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:14.761 [2024-12-06 03:21:34.788218] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:14.761 [2024-12-06 03:21:34.788221] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:14.761 [2024-12-06 03:21:34.788227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:14.761 [2024-12-06 03:21:34.788233] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:14.761 [2024-12-06 03:21:34.788237] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:14.761 [2024-12-06 03:21:34.788240] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.761 [2024-12-06 03:21:34.788245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.788252] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:14.761 [2024-12-06 03:21:34.788255] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.761 [2024-12-06 03:21:34.788260] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.761 [2024-12-06 03:21:34.788265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.788272] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:14.761 [2024-12-06 03:21:34.788276] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:14.761 [2024-12-06 03:21:34.788279] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:14.761 [2024-12-06 03:21:34.788284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:14.761 [2024-12-06 03:21:34.788290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.788301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.788310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:14.761 [2024-12-06 03:21:34.788316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:14.761 ===================================================== 00:13:14.761 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:14.761 ===================================================== 00:13:14.761 Controller Capabilities/Features 00:13:14.761 ================================ 00:13:14.761 Vendor ID: 4e58 00:13:14.761 Subsystem Vendor ID: 4e58 00:13:14.761 Serial Number: SPDK1 00:13:14.761 Model Number: SPDK bdev Controller 00:13:14.761 Firmware Version: 25.01 00:13:14.761 Recommended Arb Burst: 6 00:13:14.762 IEEE OUI Identifier: 8d 6b 50 00:13:14.762 Multi-path I/O 00:13:14.762 May have multiple subsystem ports: Yes 00:13:14.762 May have multiple controllers: Yes 00:13:14.762 Associated with SR-IOV VF: No 00:13:14.762 Max Data Transfer Size: 131072 00:13:14.762 Max Number of Namespaces: 32 00:13:14.762 Max Number of I/O Queues: 127 00:13:14.762 NVMe Specification Version (VS): 1.3 00:13:14.762 NVMe Specification Version (Identify): 1.3 00:13:14.762 Maximum Queue Entries: 256 00:13:14.762 Contiguous Queues Required: Yes 00:13:14.762 Arbitration Mechanisms Supported 00:13:14.762 Weighted Round Robin: Not Supported 00:13:14.762 Vendor Specific: Not Supported 00:13:14.762 Reset Timeout: 15000 ms 00:13:14.762 Doorbell Stride: 4 bytes 00:13:14.762 NVM Subsystem Reset: Not Supported 00:13:14.762 Command Sets Supported 00:13:14.762 NVM Command Set: Supported 00:13:14.762 Boot Partition: Not Supported 00:13:14.762 Memory 
Page Size Minimum: 4096 bytes 00:13:14.762 Memory Page Size Maximum: 4096 bytes 00:13:14.762 Persistent Memory Region: Not Supported 00:13:14.762 Optional Asynchronous Events Supported 00:13:14.762 Namespace Attribute Notices: Supported 00:13:14.762 Firmware Activation Notices: Not Supported 00:13:14.762 ANA Change Notices: Not Supported 00:13:14.762 PLE Aggregate Log Change Notices: Not Supported 00:13:14.762 LBA Status Info Alert Notices: Not Supported 00:13:14.762 EGE Aggregate Log Change Notices: Not Supported 00:13:14.762 Normal NVM Subsystem Shutdown event: Not Supported 00:13:14.762 Zone Descriptor Change Notices: Not Supported 00:13:14.762 Discovery Log Change Notices: Not Supported 00:13:14.762 Controller Attributes 00:13:14.762 128-bit Host Identifier: Supported 00:13:14.762 Non-Operational Permissive Mode: Not Supported 00:13:14.762 NVM Sets: Not Supported 00:13:14.762 Read Recovery Levels: Not Supported 00:13:14.762 Endurance Groups: Not Supported 00:13:14.762 Predictable Latency Mode: Not Supported 00:13:14.762 Traffic Based Keep ALive: Not Supported 00:13:14.762 Namespace Granularity: Not Supported 00:13:14.762 SQ Associations: Not Supported 00:13:14.762 UUID List: Not Supported 00:13:14.762 Multi-Domain Subsystem: Not Supported 00:13:14.762 Fixed Capacity Management: Not Supported 00:13:14.762 Variable Capacity Management: Not Supported 00:13:14.762 Delete Endurance Group: Not Supported 00:13:14.762 Delete NVM Set: Not Supported 00:13:14.762 Extended LBA Formats Supported: Not Supported 00:13:14.762 Flexible Data Placement Supported: Not Supported 00:13:14.762 00:13:14.762 Controller Memory Buffer Support 00:13:14.762 ================================ 00:13:14.762 Supported: No 00:13:14.762 00:13:14.762 Persistent Memory Region Support 00:13:14.762 ================================ 00:13:14.762 Supported: No 00:13:14.762 00:13:14.762 Admin Command Set Attributes 00:13:14.762 ============================ 00:13:14.762 Security Send/Receive: Not Supported 
00:13:14.762 Format NVM: Not Supported 00:13:14.762 Firmware Activate/Download: Not Supported 00:13:14.762 Namespace Management: Not Supported 00:13:14.762 Device Self-Test: Not Supported 00:13:14.762 Directives: Not Supported 00:13:14.762 NVMe-MI: Not Supported 00:13:14.762 Virtualization Management: Not Supported 00:13:14.762 Doorbell Buffer Config: Not Supported 00:13:14.762 Get LBA Status Capability: Not Supported 00:13:14.762 Command & Feature Lockdown Capability: Not Supported 00:13:14.762 Abort Command Limit: 4 00:13:14.762 Async Event Request Limit: 4 00:13:14.762 Number of Firmware Slots: N/A 00:13:14.762 Firmware Slot 1 Read-Only: N/A 00:13:14.762 Firmware Activation Without Reset: N/A 00:13:14.762 Multiple Update Detection Support: N/A 00:13:14.762 Firmware Update Granularity: No Information Provided 00:13:14.762 Per-Namespace SMART Log: No 00:13:14.762 Asymmetric Namespace Access Log Page: Not Supported 00:13:14.762 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:14.762 Command Effects Log Page: Supported 00:13:14.762 Get Log Page Extended Data: Supported 00:13:14.762 Telemetry Log Pages: Not Supported 00:13:14.762 Persistent Event Log Pages: Not Supported 00:13:14.762 Supported Log Pages Log Page: May Support 00:13:14.762 Commands Supported & Effects Log Page: Not Supported 00:13:14.762 Feature Identifiers & Effects Log Page:May Support 00:13:14.762 NVMe-MI Commands & Effects Log Page: May Support 00:13:14.762 Data Area 4 for Telemetry Log: Not Supported 00:13:14.762 Error Log Page Entries Supported: 128 00:13:14.762 Keep Alive: Supported 00:13:14.762 Keep Alive Granularity: 10000 ms 00:13:14.762 00:13:14.762 NVM Command Set Attributes 00:13:14.762 ========================== 00:13:14.762 Submission Queue Entry Size 00:13:14.762 Max: 64 00:13:14.762 Min: 64 00:13:14.762 Completion Queue Entry Size 00:13:14.762 Max: 16 00:13:14.762 Min: 16 00:13:14.762 Number of Namespaces: 32 00:13:14.762 Compare Command: Supported 00:13:14.762 Write Uncorrectable 
Command: Not Supported 00:13:14.762 Dataset Management Command: Supported 00:13:14.762 Write Zeroes Command: Supported 00:13:14.762 Set Features Save Field: Not Supported 00:13:14.762 Reservations: Not Supported 00:13:14.762 Timestamp: Not Supported 00:13:14.762 Copy: Supported 00:13:14.762 Volatile Write Cache: Present 00:13:14.762 Atomic Write Unit (Normal): 1 00:13:14.762 Atomic Write Unit (PFail): 1 00:13:14.762 Atomic Compare & Write Unit: 1 00:13:14.762 Fused Compare & Write: Supported 00:13:14.762 Scatter-Gather List 00:13:14.762 SGL Command Set: Supported (Dword aligned) 00:13:14.762 SGL Keyed: Not Supported 00:13:14.762 SGL Bit Bucket Descriptor: Not Supported 00:13:14.762 SGL Metadata Pointer: Not Supported 00:13:14.762 Oversized SGL: Not Supported 00:13:14.762 SGL Metadata Address: Not Supported 00:13:14.762 SGL Offset: Not Supported 00:13:14.762 Transport SGL Data Block: Not Supported 00:13:14.762 Replay Protected Memory Block: Not Supported 00:13:14.762 00:13:14.762 Firmware Slot Information 00:13:14.762 ========================= 00:13:14.762 Active slot: 1 00:13:14.762 Slot 1 Firmware Revision: 25.01 00:13:14.762 00:13:14.762 00:13:14.762 Commands Supported and Effects 00:13:14.762 ============================== 00:13:14.762 Admin Commands 00:13:14.762 -------------- 00:13:14.762 Get Log Page (02h): Supported 00:13:14.762 Identify (06h): Supported 00:13:14.762 Abort (08h): Supported 00:13:14.763 Set Features (09h): Supported 00:13:14.763 Get Features (0Ah): Supported 00:13:14.763 Asynchronous Event Request (0Ch): Supported 00:13:14.763 Keep Alive (18h): Supported 00:13:14.763 I/O Commands 00:13:14.763 ------------ 00:13:14.763 Flush (00h): Supported LBA-Change 00:13:14.763 Write (01h): Supported LBA-Change 00:13:14.763 Read (02h): Supported 00:13:14.763 Compare (05h): Supported 00:13:14.763 Write Zeroes (08h): Supported LBA-Change 00:13:14.763 Dataset Management (09h): Supported LBA-Change 00:13:14.763 Copy (19h): Supported LBA-Change 00:13:14.763 
00:13:14.763 Error Log 00:13:14.763 ========= 00:13:14.763 00:13:14.763 Arbitration 00:13:14.763 =========== 00:13:14.763 Arbitration Burst: 1 00:13:14.763 00:13:14.763 Power Management 00:13:14.763 ================ 00:13:14.763 Number of Power States: 1 00:13:14.763 Current Power State: Power State #0 00:13:14.763 Power State #0: 00:13:14.763 Max Power: 0.00 W 00:13:14.763 Non-Operational State: Operational 00:13:14.763 Entry Latency: Not Reported 00:13:14.763 Exit Latency: Not Reported 00:13:14.763 Relative Read Throughput: 0 00:13:14.763 Relative Read Latency: 0 00:13:14.763 Relative Write Throughput: 0 00:13:14.763 Relative Write Latency: 0 00:13:14.763 Idle Power: Not Reported 00:13:14.763 Active Power: Not Reported 00:13:14.763 Non-Operational Permissive Mode: Not Supported 00:13:14.763 00:13:14.763 Health Information 00:13:14.763 ================== 00:13:14.763 Critical Warnings: 00:13:14.763 Available Spare Space: OK 00:13:14.763 Temperature: OK 00:13:14.763 Device Reliability: OK 00:13:14.763 Read Only: No 00:13:14.763 Volatile Memory Backup: OK 00:13:14.763 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:14.763 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:14.763 Available Spare: 0% 00:13:14.763 Available Sp[2024-12-06 03:21:34.788403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:14.763 [2024-12-06 03:21:34.788413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:14.763 [2024-12-06 03:21:34.788439] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:14.763 [2024-12-06 03:21:34.788449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.763 [2024-12-06 03:21:34.788454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.763 [2024-12-06 03:21:34.788460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.763 [2024-12-06 03:21:34.788465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.763 [2024-12-06 03:21:34.788545] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:14.763 [2024-12-06 03:21:34.788554] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:14.763 [2024-12-06 03:21:34.789552] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:14.763 [2024-12-06 03:21:34.789605] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:14.763 [2024-12-06 03:21:34.789612] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:14.763 [2024-12-06 03:21:34.790554] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:14.763 [2024-12-06 03:21:34.790564] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:14.763 [2024-12-06 03:21:34.790610] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:14.763 [2024-12-06 03:21:34.791588] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:14.763 are Threshold: 0% 00:13:14.763 Life Percentage Used: 0% 
00:13:14.763 Data Units Read: 0 00:13:14.763 Data Units Written: 0 00:13:14.763 Host Read Commands: 0 00:13:14.763 Host Write Commands: 0 00:13:14.763 Controller Busy Time: 0 minutes 00:13:14.763 Power Cycles: 0 00:13:14.763 Power On Hours: 0 hours 00:13:14.763 Unsafe Shutdowns: 0 00:13:14.763 Unrecoverable Media Errors: 0 00:13:14.763 Lifetime Error Log Entries: 0 00:13:14.763 Warning Temperature Time: 0 minutes 00:13:14.763 Critical Temperature Time: 0 minutes 00:13:14.763 00:13:14.763 Number of Queues 00:13:14.763 ================ 00:13:14.763 Number of I/O Submission Queues: 127 00:13:14.763 Number of I/O Completion Queues: 127 00:13:14.763 00:13:14.763 Active Namespaces 00:13:14.763 ================= 00:13:14.763 Namespace ID:1 00:13:14.763 Error Recovery Timeout: Unlimited 00:13:14.763 Command Set Identifier: NVM (00h) 00:13:14.763 Deallocate: Supported 00:13:14.763 Deallocated/Unwritten Error: Not Supported 00:13:14.763 Deallocated Read Value: Unknown 00:13:14.763 Deallocate in Write Zeroes: Not Supported 00:13:14.763 Deallocated Guard Field: 0xFFFF 00:13:14.763 Flush: Supported 00:13:14.763 Reservation: Supported 00:13:14.763 Namespace Sharing Capabilities: Multiple Controllers 00:13:14.763 Size (in LBAs): 131072 (0GiB) 00:13:14.763 Capacity (in LBAs): 131072 (0GiB) 00:13:14.763 Utilization (in LBAs): 131072 (0GiB) 00:13:14.763 NGUID: 4B7520F6A446461B9361987872DEC65A 00:13:14.763 UUID: 4b7520f6-a446-461b-9361-987872dec65a 00:13:14.763 Thin Provisioning: Not Supported 00:13:14.763 Per-NS Atomic Units: Yes 00:13:14.763 Atomic Boundary Size (Normal): 0 00:13:14.763 Atomic Boundary Size (PFail): 0 00:13:14.763 Atomic Boundary Offset: 0 00:13:14.763 Maximum Single Source Range Length: 65535 00:13:14.763 Maximum Copy Length: 65535 00:13:14.763 Maximum Source Range Count: 1 00:13:14.763 NGUID/EUI64 Never Reused: No 00:13:14.763 Namespace Write Protected: No 00:13:14.763 Number of LBA Formats: 1 00:13:14.763 Current LBA Format: LBA Format #00 00:13:14.763 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:14.763 00:13:14.763 03:21:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:15.026 [2024-12-06 03:21:35.019770] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:20.499 Initializing NVMe Controllers 00:13:20.499 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:20.499 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:20.499 Initialization complete. Launching workers. 00:13:20.499 ======================================================== 00:13:20.499 Latency(us) 00:13:20.499 Device Information : IOPS MiB/s Average min max 00:13:20.499 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39917.37 155.93 3207.50 996.95 7967.37 00:13:20.499 ======================================================== 00:13:20.499 Total : 39917.37 155.93 3207.50 996.95 7967.37 00:13:20.499 00:13:20.499 [2024-12-06 03:21:40.040742] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:20.499 03:21:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:20.499 [2024-12-06 03:21:40.276852] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:25.768 Initializing NVMe Controllers 00:13:25.768 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:25.768 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:25.768 Initialization complete. Launching workers. 00:13:25.768 ======================================================== 00:13:25.768 Latency(us) 00:13:25.768 Device Information : IOPS MiB/s Average min max 00:13:25.768 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.20 62.71 7978.34 6461.57 8502.33 00:13:25.768 ======================================================== 00:13:25.768 Total : 16054.20 62.71 7978.34 6461.57 8502.33 00:13:25.768 00:13:25.768 [2024-12-06 03:21:45.318387] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:25.768 03:21:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:25.768 [2024-12-06 03:21:45.523382] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:31.029 [2024-12-06 03:21:50.598231] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:31.029 Initializing NVMe Controllers 00:13:31.029 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:31.029 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:31.029 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:31.029 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:31.029 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:31.029 Initialization complete. 
Launching workers. 00:13:31.029 Starting thread on core 2 00:13:31.029 Starting thread on core 3 00:13:31.029 Starting thread on core 1 00:13:31.029 03:21:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:31.029 [2024-12-06 03:21:50.889339] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:34.313 [2024-12-06 03:21:53.942080] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.313 Initializing NVMe Controllers 00:13:34.313 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.313 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.313 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:34.313 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:34.313 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:34.313 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:34.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:34.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:34.313 Initialization complete. Launching workers. 
00:13:34.313 Starting thread on core 1 with urgent priority queue 00:13:34.313 Starting thread on core 2 with urgent priority queue 00:13:34.313 Starting thread on core 3 with urgent priority queue 00:13:34.313 Starting thread on core 0 with urgent priority queue 00:13:34.313 SPDK bdev Controller (SPDK1 ) core 0: 8140.67 IO/s 12.28 secs/100000 ios 00:13:34.313 SPDK bdev Controller (SPDK1 ) core 1: 8282.67 IO/s 12.07 secs/100000 ios 00:13:34.313 SPDK bdev Controller (SPDK1 ) core 2: 7519.33 IO/s 13.30 secs/100000 ios 00:13:34.313 SPDK bdev Controller (SPDK1 ) core 3: 8067.00 IO/s 12.40 secs/100000 ios 00:13:34.313 ======================================================== 00:13:34.313 00:13:34.313 03:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:34.313 [2024-12-06 03:21:54.237435] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:34.313 Initializing NVMe Controllers 00:13:34.313 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.313 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.313 Namespace ID: 1 size: 0GB 00:13:34.313 Initialization complete. 00:13:34.313 INFO: using host memory buffer for IO 00:13:34.313 Hello world! 
00:13:34.313 [2024-12-06 03:21:54.271663] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.313 03:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:34.571 [2024-12-06 03:21:54.557373] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.517 Initializing NVMe Controllers 00:13:35.517 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:35.517 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:35.517 Initialization complete. Launching workers. 00:13:35.517 submit (in ns) avg, min, max = 6666.6, 3239.1, 4000537.4 00:13:35.517 complete (in ns) avg, min, max = 19467.9, 1762.6, 3999309.6 00:13:35.517 00:13:35.517 Submit histogram 00:13:35.517 ================ 00:13:35.517 Range in us Cumulative Count 00:13:35.517 3.228 - 3.242: 0.0061% ( 1) 00:13:35.517 3.242 - 3.256: 0.0307% ( 4) 00:13:35.517 3.256 - 3.270: 0.0430% ( 2) 00:13:35.517 3.270 - 3.283: 0.0676% ( 4) 00:13:35.517 3.283 - 3.297: 0.1351% ( 11) 00:13:35.517 3.297 - 3.311: 0.8352% ( 114) 00:13:35.517 3.311 - 3.325: 3.8384% ( 489) 00:13:35.517 3.325 - 3.339: 8.2294% ( 715) 00:13:35.517 3.339 - 3.353: 13.5294% ( 863) 00:13:35.517 3.353 - 3.367: 19.1242% ( 911) 00:13:35.517 3.367 - 3.381: 25.4806% ( 1035) 00:13:35.517 3.381 - 3.395: 30.6025% ( 834) 00:13:35.518 3.395 - 3.409: 36.2832% ( 925) 00:13:35.518 3.409 - 3.423: 41.3069% ( 818) 00:13:35.518 3.423 - 3.437: 45.4646% ( 677) 00:13:35.518 3.437 - 3.450: 49.5363% ( 663) 00:13:35.518 3.450 - 3.464: 54.8793% ( 870) 00:13:35.518 3.464 - 3.478: 61.9788% ( 1156) 00:13:35.518 3.478 - 3.492: 66.4988% ( 736) 00:13:35.518 3.492 - 3.506: 71.4426% ( 805) 00:13:35.518 3.506 - 3.520: 76.6198% ( 843) 
00:13:35.518 3.520 - 3.534: 80.5380% ( 638) 00:13:35.518 3.534 - 3.548: 83.0805% ( 414) 00:13:35.518 3.548 - 3.562: 84.6281% ( 252) 00:13:35.518 3.562 - 3.590: 86.5565% ( 314) 00:13:35.518 3.590 - 3.617: 87.6988% ( 186) 00:13:35.518 3.617 - 3.645: 89.0561% ( 221) 00:13:35.518 3.645 - 3.673: 90.6590% ( 261) 00:13:35.518 3.673 - 3.701: 92.4154% ( 286) 00:13:35.518 3.701 - 3.729: 94.3499% ( 315) 00:13:35.518 3.729 - 3.757: 96.1985% ( 301) 00:13:35.518 3.757 - 3.784: 97.4390% ( 202) 00:13:35.518 3.784 - 3.812: 98.3418% ( 147) 00:13:35.518 3.812 - 3.840: 99.0051% ( 108) 00:13:35.518 3.840 - 3.868: 99.3122% ( 50) 00:13:35.518 3.868 - 3.896: 99.4841% ( 28) 00:13:35.518 3.896 - 3.923: 99.5640% ( 13) 00:13:35.518 3.923 - 3.951: 99.5762% ( 2) 00:13:35.518 3.979 - 4.007: 99.5824% ( 1) 00:13:35.518 5.203 - 5.231: 99.5947% ( 2) 00:13:35.518 5.231 - 5.259: 99.6008% ( 1) 00:13:35.518 5.259 - 5.287: 99.6070% ( 1) 00:13:35.518 5.315 - 5.343: 99.6192% ( 2) 00:13:35.518 5.343 - 5.370: 99.6254% ( 1) 00:13:35.518 5.510 - 5.537: 99.6377% ( 2) 00:13:35.518 5.593 - 5.621: 99.6438% ( 1) 00:13:35.518 5.677 - 5.704: 99.6561% ( 2) 00:13:35.518 5.704 - 5.732: 99.6684% ( 2) 00:13:35.518 5.732 - 5.760: 99.6806% ( 2) 00:13:35.518 5.760 - 5.788: 99.6929% ( 2) 00:13:35.518 5.788 - 5.816: 99.7114% ( 3) 00:13:35.518 5.816 - 5.843: 99.7175% ( 1) 00:13:35.518 5.843 - 5.871: 99.7236% ( 1) 00:13:35.518 5.871 - 5.899: 99.7298% ( 1) 00:13:35.518 5.927 - 5.955: 99.7359% ( 1) 00:13:35.518 5.955 - 5.983: 99.7421% ( 1) 00:13:35.518 6.038 - 6.066: 99.7482% ( 1) 00:13:35.518 6.066 - 6.094: 99.7543% ( 1) 00:13:35.518 6.094 - 6.122: 99.7605% ( 1) 00:13:35.518 6.122 - 6.150: 99.7666% ( 1) 00:13:35.518 6.150 - 6.177: 99.7728% ( 1) 00:13:35.518 6.177 - 6.205: 99.7789% ( 1) 00:13:35.518 6.205 - 6.233: 99.7912% ( 2) 00:13:35.518 6.233 - 6.261: 99.7973% ( 1) 00:13:35.518 6.400 - 6.428: 99.8035% ( 1) 00:13:35.518 6.456 - 6.483: 99.8096% ( 1) 00:13:35.518 6.567 - 6.595: 99.8158% ( 1) 00:13:35.518 6.623 - 6.650: 99.8219% ( 
1) 00:13:35.518 6.650 - 6.678: 99.8280% ( 1) 00:13:35.518 6.845 - 6.873: 99.8342% ( 1) 00:13:35.518 6.873 - 6.901: 99.8403% ( 1) 00:13:35.518 6.901 - 6.929: 99.8526% ( 2) 00:13:35.518 6.957 - 6.984: 99.8587% ( 1) 00:13:35.518 7.068 - 7.096: 99.8649% ( 1) 00:13:35.518 7.346 - 7.402: 99.8710% ( 1) 00:13:35.518 7.624 - 7.680: 99.8772% ( 1) 00:13:35.518 7.791 - 7.847: 99.8833% ( 1) 00:13:35.518 7.958 - 8.014: 99.8895% ( 1) 00:13:35.518 8.237 - 8.292: 99.8956% ( 1) 00:13:35.518 [2024-12-06 03:21:55.579367] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.518 8.403 - 8.459: 99.9079% ( 2) 00:13:35.518 8.626 - 8.682: 99.9140% ( 1) 00:13:35.518 8.737 - 8.793: 99.9202% ( 1) 00:13:35.518 3989.148 - 4017.642: 100.0000% ( 13) 00:13:35.518 00:13:35.518 Complete histogram 00:13:35.518 ================== 00:13:35.518 Range in us Cumulative Count 00:13:35.518 1.760 - 1.767: 0.0184% ( 3) 00:13:35.518 1.767 - 1.774: 0.0676% ( 8) 00:13:35.518 1.774 - 1.781: 0.0860% ( 3) 00:13:35.518 1.781 - 1.795: 0.1044% ( 3) 00:13:35.518 1.795 - 1.809: 0.1167% ( 2) 00:13:35.518 1.809 - 1.823: 4.7534% ( 755) 00:13:35.518 1.823 - 1.837: 21.1448% ( 2669) 00:13:35.518 1.837 - 1.850: 25.0752% ( 640) 00:13:35.518 1.850 - 1.864: 27.1142% ( 332) 00:13:35.518 1.864 - 1.878: 39.9742% ( 2094) 00:13:35.518 1.878 - 1.892: 79.6291% ( 6457) 00:13:35.518 1.892 - 1.906: 91.7153% ( 1968) 00:13:35.518 1.906 - 1.920: 95.2589% ( 577) 00:13:35.518 1.920 - 1.934: 96.1064% ( 138) 00:13:35.518 1.934 - 1.948: 96.7389% ( 103) 00:13:35.518 1.948 - 1.962: 98.0900% ( 220) 00:13:35.518 1.962 - 1.976: 98.9682% ( 143) 00:13:35.518 1.976 - 1.990: 99.2262% ( 42) 00:13:35.518 1.990 - 2.003: 99.3060% ( 13) 00:13:35.518 2.003 - 2.017: 99.3367% ( 5) 00:13:35.518 2.017 - 2.031: 99.3490% ( 2) 00:13:35.518 2.031 - 2.045: 99.3613% ( 2) 00:13:35.518 2.045 - 2.059: 99.3736% ( 2) 00:13:35.518 2.059 - 2.073: 99.3797% ( 1) 00:13:35.518 2.268 - 2.282: 99.3859% ( 1) 00:13:35.518 3.423 - 
3.437: 99.3920% ( 1) 00:13:35.518 3.534 - 3.548: 99.3981% ( 1) 00:13:35.518 3.729 - 3.757: 99.4104% ( 2) 00:13:35.518 3.868 - 3.896: 99.4166% ( 1) 00:13:35.518 3.979 - 4.007: 99.4227% ( 1) 00:13:35.518 4.118 - 4.146: 99.4289% ( 1) 00:13:35.518 4.174 - 4.202: 99.4411% ( 2) 00:13:35.518 4.202 - 4.230: 99.4473% ( 1) 00:13:35.518 4.397 - 4.424: 99.4534% ( 1) 00:13:35.518 4.452 - 4.480: 99.4596% ( 1) 00:13:35.518 4.647 - 4.675: 99.4657% ( 1) 00:13:35.518 4.953 - 4.981: 99.4718% ( 1) 00:13:35.518 5.092 - 5.120: 99.4780% ( 1) 00:13:35.518 5.203 - 5.231: 99.4841% ( 1) 00:13:35.518 5.510 - 5.537: 99.4903% ( 1) 00:13:35.518 5.537 - 5.565: 99.4964% ( 1) 00:13:35.518 5.621 - 5.649: 99.5025% ( 1) 00:13:35.518 6.289 - 6.317: 99.5087% ( 1) 00:13:35.518 6.428 - 6.456: 99.5148% ( 1) 00:13:35.518 7.123 - 7.179: 99.5210% ( 1) 00:13:35.518 7.680 - 7.736: 99.5271% ( 1) 00:13:35.518 8.682 - 8.737: 99.5333% ( 1) 00:13:35.518 9.350 - 9.405: 99.5394% ( 1) 00:13:35.518 9.628 - 9.683: 99.5455% ( 1) 00:13:35.518 14.080 - 14.136: 99.5517% ( 1) 00:13:35.518 219.937 - 220.828: 99.5578% ( 1) 00:13:35.518 2991.861 - 3006.108: 99.5640% ( 1) 00:13:35.518 3575.986 - 3590.233: 99.5701% ( 1) 00:13:35.518 3989.148 - 4017.642: 100.0000% ( 70) 00:13:35.518 00:13:35.518 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:35.518 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:35.518 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:35.518 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:35.518 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:13:35.775 [ 00:13:35.775 { 00:13:35.775 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:35.775 "subtype": "Discovery", 00:13:35.775 "listen_addresses": [], 00:13:35.775 "allow_any_host": true, 00:13:35.775 "hosts": [] 00:13:35.775 }, 00:13:35.775 { 00:13:35.775 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:35.775 "subtype": "NVMe", 00:13:35.775 "listen_addresses": [ 00:13:35.775 { 00:13:35.775 "trtype": "VFIOUSER", 00:13:35.775 "adrfam": "IPv4", 00:13:35.775 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:35.775 "trsvcid": "0" 00:13:35.775 } 00:13:35.775 ], 00:13:35.775 "allow_any_host": true, 00:13:35.775 "hosts": [], 00:13:35.775 "serial_number": "SPDK1", 00:13:35.775 "model_number": "SPDK bdev Controller", 00:13:35.775 "max_namespaces": 32, 00:13:35.775 "min_cntlid": 1, 00:13:35.775 "max_cntlid": 65519, 00:13:35.775 "namespaces": [ 00:13:35.775 { 00:13:35.775 "nsid": 1, 00:13:35.775 "bdev_name": "Malloc1", 00:13:35.775 "name": "Malloc1", 00:13:35.775 "nguid": "4B7520F6A446461B9361987872DEC65A", 00:13:35.775 "uuid": "4b7520f6-a446-461b-9361-987872dec65a" 00:13:35.775 } 00:13:35.775 ] 00:13:35.775 }, 00:13:35.775 { 00:13:35.775 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:35.775 "subtype": "NVMe", 00:13:35.775 "listen_addresses": [ 00:13:35.775 { 00:13:35.775 "trtype": "VFIOUSER", 00:13:35.775 "adrfam": "IPv4", 00:13:35.775 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:35.775 "trsvcid": "0" 00:13:35.775 } 00:13:35.775 ], 00:13:35.775 "allow_any_host": true, 00:13:35.775 "hosts": [], 00:13:35.775 "serial_number": "SPDK2", 00:13:35.775 "model_number": "SPDK bdev Controller", 00:13:35.775 "max_namespaces": 32, 00:13:35.775 "min_cntlid": 1, 00:13:35.775 "max_cntlid": 65519, 00:13:35.775 "namespaces": [ 00:13:35.775 { 00:13:35.775 "nsid": 1, 00:13:35.775 "bdev_name": "Malloc2", 00:13:35.775 "name": "Malloc2", 00:13:35.775 "nguid": "39429F25221146D28551B6CC188526E5", 00:13:35.775 "uuid": "39429f25-2211-46d2-8551-b6cc188526e5" 
00:13:35.775 } 00:13:35.775 ] 00:13:35.775 } 00:13:35.775 ] 00:13:35.775 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:35.775 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:35.776 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2580147 00:13:35.776 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:35.776 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:35.776 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:35.776 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:35.776 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:35.776 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:35.776 03:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:36.033 [2024-12-06 03:21:55.978278] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:36.033 Malloc3 00:13:36.033 03:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:36.291 [2024-12-06 03:21:56.235299] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:36.291 03:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:36.291 Asynchronous Event Request test 00:13:36.291 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:36.291 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:36.291 Registering asynchronous event callbacks... 00:13:36.291 Starting namespace attribute notice tests for all controllers... 00:13:36.291 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:36.291 aer_cb - Changed Namespace 00:13:36.291 Cleaning up... 
00:13:36.291 [ 00:13:36.291 { 00:13:36.291 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:36.291 "subtype": "Discovery", 00:13:36.291 "listen_addresses": [], 00:13:36.291 "allow_any_host": true, 00:13:36.291 "hosts": [] 00:13:36.291 }, 00:13:36.291 { 00:13:36.291 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:36.291 "subtype": "NVMe", 00:13:36.291 "listen_addresses": [ 00:13:36.291 { 00:13:36.291 "trtype": "VFIOUSER", 00:13:36.291 "adrfam": "IPv4", 00:13:36.291 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:36.291 "trsvcid": "0" 00:13:36.291 } 00:13:36.291 ], 00:13:36.291 "allow_any_host": true, 00:13:36.291 "hosts": [], 00:13:36.291 "serial_number": "SPDK1", 00:13:36.291 "model_number": "SPDK bdev Controller", 00:13:36.291 "max_namespaces": 32, 00:13:36.291 "min_cntlid": 1, 00:13:36.291 "max_cntlid": 65519, 00:13:36.291 "namespaces": [ 00:13:36.291 { 00:13:36.291 "nsid": 1, 00:13:36.291 "bdev_name": "Malloc1", 00:13:36.291 "name": "Malloc1", 00:13:36.291 "nguid": "4B7520F6A446461B9361987872DEC65A", 00:13:36.291 "uuid": "4b7520f6-a446-461b-9361-987872dec65a" 00:13:36.291 }, 00:13:36.291 { 00:13:36.291 "nsid": 2, 00:13:36.291 "bdev_name": "Malloc3", 00:13:36.291 "name": "Malloc3", 00:13:36.291 "nguid": "1082082D121C468E9A30AFBD3E5F430B", 00:13:36.291 "uuid": "1082082d-121c-468e-9a30-afbd3e5f430b" 00:13:36.291 } 00:13:36.291 ] 00:13:36.291 }, 00:13:36.291 { 00:13:36.291 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:36.291 "subtype": "NVMe", 00:13:36.291 "listen_addresses": [ 00:13:36.291 { 00:13:36.291 "trtype": "VFIOUSER", 00:13:36.291 "adrfam": "IPv4", 00:13:36.291 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:36.291 "trsvcid": "0" 00:13:36.291 } 00:13:36.291 ], 00:13:36.291 "allow_any_host": true, 00:13:36.291 "hosts": [], 00:13:36.291 "serial_number": "SPDK2", 00:13:36.291 "model_number": "SPDK bdev Controller", 00:13:36.291 "max_namespaces": 32, 00:13:36.291 "min_cntlid": 1, 00:13:36.291 "max_cntlid": 65519, 00:13:36.291 "namespaces": [ 
00:13:36.291 { 00:13:36.291 "nsid": 1, 00:13:36.291 "bdev_name": "Malloc2", 00:13:36.291 "name": "Malloc2", 00:13:36.291 "nguid": "39429F25221146D28551B6CC188526E5", 00:13:36.291 "uuid": "39429f25-2211-46d2-8551-b6cc188526e5" 00:13:36.291 } 00:13:36.291 ] 00:13:36.291 } 00:13:36.291 ] 00:13:36.551 03:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2580147 00:13:36.551 03:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:36.551 03:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:36.551 03:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:36.551 03:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:36.551 [2024-12-06 03:21:56.479728] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:13:36.551 [2024-12-06 03:21:56.479776] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2580165 ] 00:13:36.551 [2024-12-06 03:21:56.519732] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:36.551 [2024-12-06 03:21:56.528192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:36.551 [2024-12-06 03:21:56.528217] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9670172000 00:13:36.551 [2024-12-06 03:21:56.529193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.551 [2024-12-06 03:21:56.530200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.551 [2024-12-06 03:21:56.531208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.551 [2024-12-06 03:21:56.532213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:36.551 [2024-12-06 03:21:56.533220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:36.551 [2024-12-06 03:21:56.534225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.551 [2024-12-06 03:21:56.535225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:36.551 
[2024-12-06 03:21:56.536235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.551 [2024-12-06 03:21:56.537250] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:36.551 [2024-12-06 03:21:56.537260] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9670167000 00:13:36.552 [2024-12-06 03:21:56.538198] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:36.552 [2024-12-06 03:21:56.547725] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:36.552 [2024-12-06 03:21:56.547750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:36.552 [2024-12-06 03:21:56.552829] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:36.552 [2024-12-06 03:21:56.552868] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:36.552 [2024-12-06 03:21:56.552939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:36.552 [2024-12-06 03:21:56.552953] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:36.552 [2024-12-06 03:21:56.552958] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:36.552 [2024-12-06 03:21:56.553842] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:36.552 [2024-12-06 03:21:56.553851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:36.552 [2024-12-06 03:21:56.553858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:36.552 [2024-12-06 03:21:56.554843] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:36.552 [2024-12-06 03:21:56.554852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:36.552 [2024-12-06 03:21:56.554859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:36.552 [2024-12-06 03:21:56.555849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:36.552 [2024-12-06 03:21:56.555858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:36.552 [2024-12-06 03:21:56.556855] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:36.552 [2024-12-06 03:21:56.556864] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:36.552 [2024-12-06 03:21:56.556871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:36.552 [2024-12-06 03:21:56.556878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:36.552 [2024-12-06 03:21:56.556985] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:36.552 [2024-12-06 03:21:56.556990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:36.552 [2024-12-06 03:21:56.556994] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:36.552 [2024-12-06 03:21:56.557870] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:36.552 [2024-12-06 03:21:56.558881] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:36.552 [2024-12-06 03:21:56.559884] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:36.552 [2024-12-06 03:21:56.560887] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:36.552 [2024-12-06 03:21:56.560924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:36.552 [2024-12-06 03:21:56.561898] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:36.552 [2024-12-06 03:21:56.561907] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:36.552 [2024-12-06 03:21:56.561912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.561929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:36.552 [2024-12-06 03:21:56.561935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.561952] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:36.552 [2024-12-06 03:21:56.561957] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.552 [2024-12-06 03:21:56.561960] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.552 [2024-12-06 03:21:56.561971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.552 [2024-12-06 03:21:56.569954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:36.552 [2024-12-06 03:21:56.569965] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:36.552 [2024-12-06 03:21:56.569972] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:36.552 [2024-12-06 03:21:56.569976] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:36.552 [2024-12-06 03:21:56.569980] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:36.552 [2024-12-06 03:21:56.569985] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:36.552 [2024-12-06 03:21:56.569991] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:36.552 [2024-12-06 03:21:56.569995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.570002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.570012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:36.552 [2024-12-06 03:21:56.577953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:36.552 [2024-12-06 03:21:56.577965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.552 [2024-12-06 03:21:56.577973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.552 [2024-12-06 03:21:56.577980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.552 [2024-12-06 03:21:56.577988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.552 [2024-12-06 03:21:56.577992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.578000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.578009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:36.552 [2024-12-06 03:21:56.585953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:36.552 [2024-12-06 03:21:56.585961] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:36.552 [2024-12-06 03:21:56.585966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.585971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.585977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.585985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:36.552 [2024-12-06 03:21:56.593953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:36.552 [2024-12-06 03:21:56.594009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.594017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:36.552 
[2024-12-06 03:21:56.594024] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:36.552 [2024-12-06 03:21:56.594028] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:36.552 [2024-12-06 03:21:56.594031] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.552 [2024-12-06 03:21:56.594037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:36.552 [2024-12-06 03:21:56.601954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:36.552 [2024-12-06 03:21:56.601971] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:36.552 [2024-12-06 03:21:56.601985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.601992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.601998] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:36.552 [2024-12-06 03:21:56.602002] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.552 [2024-12-06 03:21:56.602005] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.552 [2024-12-06 03:21:56.602011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.552 [2024-12-06 03:21:56.609952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:36.552 [2024-12-06 03:21:56.609965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:36.552 [2024-12-06 03:21:56.609973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:36.553 [2024-12-06 03:21:56.609979] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:36.553 [2024-12-06 03:21:56.609983] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.553 [2024-12-06 03:21:56.609986] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.553 [2024-12-06 03:21:56.609992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.553 [2024-12-06 03:21:56.617951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:36.553 [2024-12-06 03:21:56.617960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:36.553 [2024-12-06 03:21:56.617966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:36.553 [2024-12-06 03:21:56.617974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:36.553 [2024-12-06 03:21:56.617980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:36.553 [2024-12-06 03:21:56.617985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:36.553 [2024-12-06 03:21:56.617990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:36.553 [2024-12-06 03:21:56.617994] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:36.553 [2024-12-06 03:21:56.617998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:36.553 [2024-12-06 03:21:56.618003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:36.553 [2024-12-06 03:21:56.618021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:36.553 [2024-12-06 03:21:56.625953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:36.553 [2024-12-06 03:21:56.625966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:36.553 [2024-12-06 03:21:56.633953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:36.553 [2024-12-06 03:21:56.633965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:36.553 [2024-12-06 03:21:56.641954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:36.553 [2024-12-06 
03:21:56.641966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:36.553 [2024-12-06 03:21:56.649953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:36.553 [2024-12-06 03:21:56.649968] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:36.553 [2024-12-06 03:21:56.649972] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:36.553 [2024-12-06 03:21:56.649975] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:36.553 [2024-12-06 03:21:56.649978] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:36.553 [2024-12-06 03:21:56.649981] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:36.553 [2024-12-06 03:21:56.649987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:36.553 [2024-12-06 03:21:56.649993] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:36.553 [2024-12-06 03:21:56.649997] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:36.553 [2024-12-06 03:21:56.650000] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.553 [2024-12-06 03:21:56.650006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:36.553 [2024-12-06 03:21:56.650012] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:36.553 [2024-12-06 03:21:56.650016] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.553 [2024-12-06 03:21:56.650019] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.553 [2024-12-06 03:21:56.650024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.553 [2024-12-06 03:21:56.650031] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:36.553 [2024-12-06 03:21:56.650035] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:36.553 [2024-12-06 03:21:56.650038] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:36.553 [2024-12-06 03:21:56.650043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:36.553 [2024-12-06 03:21:56.657955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:36.553 [2024-12-06 03:21:56.657969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:36.553 [2024-12-06 03:21:56.657979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:36.553 [2024-12-06 03:21:56.657987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:36.553 ===================================================== 00:13:36.553 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:36.553 ===================================================== 00:13:36.553 Controller Capabilities/Features 00:13:36.553 
================================ 00:13:36.553 Vendor ID: 4e58 00:13:36.553 Subsystem Vendor ID: 4e58 00:13:36.553 Serial Number: SPDK2 00:13:36.553 Model Number: SPDK bdev Controller 00:13:36.553 Firmware Version: 25.01 00:13:36.553 Recommended Arb Burst: 6 00:13:36.553 IEEE OUI Identifier: 8d 6b 50 00:13:36.553 Multi-path I/O 00:13:36.553 May have multiple subsystem ports: Yes 00:13:36.553 May have multiple controllers: Yes 00:13:36.553 Associated with SR-IOV VF: No 00:13:36.553 Max Data Transfer Size: 131072 00:13:36.553 Max Number of Namespaces: 32 00:13:36.553 Max Number of I/O Queues: 127 00:13:36.553 NVMe Specification Version (VS): 1.3 00:13:36.553 NVMe Specification Version (Identify): 1.3 00:13:36.553 Maximum Queue Entries: 256 00:13:36.553 Contiguous Queues Required: Yes 00:13:36.553 Arbitration Mechanisms Supported 00:13:36.553 Weighted Round Robin: Not Supported 00:13:36.553 Vendor Specific: Not Supported 00:13:36.553 Reset Timeout: 15000 ms 00:13:36.553 Doorbell Stride: 4 bytes 00:13:36.553 NVM Subsystem Reset: Not Supported 00:13:36.553 Command Sets Supported 00:13:36.553 NVM Command Set: Supported 00:13:36.553 Boot Partition: Not Supported 00:13:36.553 Memory Page Size Minimum: 4096 bytes 00:13:36.553 Memory Page Size Maximum: 4096 bytes 00:13:36.553 Persistent Memory Region: Not Supported 00:13:36.553 Optional Asynchronous Events Supported 00:13:36.553 Namespace Attribute Notices: Supported 00:13:36.553 Firmware Activation Notices: Not Supported 00:13:36.553 ANA Change Notices: Not Supported 00:13:36.553 PLE Aggregate Log Change Notices: Not Supported 00:13:36.553 LBA Status Info Alert Notices: Not Supported 00:13:36.553 EGE Aggregate Log Change Notices: Not Supported 00:13:36.553 Normal NVM Subsystem Shutdown event: Not Supported 00:13:36.553 Zone Descriptor Change Notices: Not Supported 00:13:36.553 Discovery Log Change Notices: Not Supported 00:13:36.553 Controller Attributes 00:13:36.553 128-bit Host Identifier: Supported 00:13:36.553 
Non-Operational Permissive Mode: Not Supported 00:13:36.553 NVM Sets: Not Supported 00:13:36.553 Read Recovery Levels: Not Supported 00:13:36.553 Endurance Groups: Not Supported 00:13:36.553 Predictable Latency Mode: Not Supported 00:13:36.553 Traffic Based Keep ALive: Not Supported 00:13:36.553 Namespace Granularity: Not Supported 00:13:36.553 SQ Associations: Not Supported 00:13:36.553 UUID List: Not Supported 00:13:36.553 Multi-Domain Subsystem: Not Supported 00:13:36.553 Fixed Capacity Management: Not Supported 00:13:36.553 Variable Capacity Management: Not Supported 00:13:36.553 Delete Endurance Group: Not Supported 00:13:36.553 Delete NVM Set: Not Supported 00:13:36.553 Extended LBA Formats Supported: Not Supported 00:13:36.553 Flexible Data Placement Supported: Not Supported 00:13:36.553 00:13:36.553 Controller Memory Buffer Support 00:13:36.553 ================================ 00:13:36.553 Supported: No 00:13:36.553 00:13:36.553 Persistent Memory Region Support 00:13:36.553 ================================ 00:13:36.553 Supported: No 00:13:36.553 00:13:36.553 Admin Command Set Attributes 00:13:36.553 ============================ 00:13:36.553 Security Send/Receive: Not Supported 00:13:36.553 Format NVM: Not Supported 00:13:36.553 Firmware Activate/Download: Not Supported 00:13:36.553 Namespace Management: Not Supported 00:13:36.553 Device Self-Test: Not Supported 00:13:36.553 Directives: Not Supported 00:13:36.553 NVMe-MI: Not Supported 00:13:36.553 Virtualization Management: Not Supported 00:13:36.553 Doorbell Buffer Config: Not Supported 00:13:36.553 Get LBA Status Capability: Not Supported 00:13:36.553 Command & Feature Lockdown Capability: Not Supported 00:13:36.553 Abort Command Limit: 4 00:13:36.553 Async Event Request Limit: 4 00:13:36.554 Number of Firmware Slots: N/A 00:13:36.554 Firmware Slot 1 Read-Only: N/A 00:13:36.554 Firmware Activation Without Reset: N/A 00:13:36.554 Multiple Update Detection Support: N/A 00:13:36.554 Firmware Update 
Granularity: No Information Provided 00:13:36.554 Per-Namespace SMART Log: No 00:13:36.554 Asymmetric Namespace Access Log Page: Not Supported 00:13:36.554 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:36.554 Command Effects Log Page: Supported 00:13:36.554 Get Log Page Extended Data: Supported 00:13:36.554 Telemetry Log Pages: Not Supported 00:13:36.554 Persistent Event Log Pages: Not Supported 00:13:36.554 Supported Log Pages Log Page: May Support 00:13:36.554 Commands Supported & Effects Log Page: Not Supported 00:13:36.554 Feature Identifiers & Effects Log Page:May Support 00:13:36.554 NVMe-MI Commands & Effects Log Page: May Support 00:13:36.554 Data Area 4 for Telemetry Log: Not Supported 00:13:36.554 Error Log Page Entries Supported: 128 00:13:36.554 Keep Alive: Supported 00:13:36.554 Keep Alive Granularity: 10000 ms 00:13:36.554 00:13:36.554 NVM Command Set Attributes 00:13:36.554 ========================== 00:13:36.554 Submission Queue Entry Size 00:13:36.554 Max: 64 00:13:36.554 Min: 64 00:13:36.554 Completion Queue Entry Size 00:13:36.554 Max: 16 00:13:36.554 Min: 16 00:13:36.554 Number of Namespaces: 32 00:13:36.554 Compare Command: Supported 00:13:36.554 Write Uncorrectable Command: Not Supported 00:13:36.554 Dataset Management Command: Supported 00:13:36.554 Write Zeroes Command: Supported 00:13:36.554 Set Features Save Field: Not Supported 00:13:36.554 Reservations: Not Supported 00:13:36.554 Timestamp: Not Supported 00:13:36.554 Copy: Supported 00:13:36.554 Volatile Write Cache: Present 00:13:36.554 Atomic Write Unit (Normal): 1 00:13:36.554 Atomic Write Unit (PFail): 1 00:13:36.554 Atomic Compare & Write Unit: 1 00:13:36.554 Fused Compare & Write: Supported 00:13:36.554 Scatter-Gather List 00:13:36.554 SGL Command Set: Supported (Dword aligned) 00:13:36.554 SGL Keyed: Not Supported 00:13:36.554 SGL Bit Bucket Descriptor: Not Supported 00:13:36.554 SGL Metadata Pointer: Not Supported 00:13:36.554 Oversized SGL: Not Supported 00:13:36.554 SGL 
Metadata Address: Not Supported 00:13:36.554 SGL Offset: Not Supported 00:13:36.554 Transport SGL Data Block: Not Supported 00:13:36.554 Replay Protected Memory Block: Not Supported 00:13:36.554 00:13:36.554 Firmware Slot Information 00:13:36.554 ========================= 00:13:36.554 Active slot: 1 00:13:36.554 Slot 1 Firmware Revision: 25.01 00:13:36.554 00:13:36.554 00:13:36.554 Commands Supported and Effects 00:13:36.554 ============================== 00:13:36.554 Admin Commands 00:13:36.554 -------------- 00:13:36.554 Get Log Page (02h): Supported 00:13:36.554 Identify (06h): Supported 00:13:36.554 Abort (08h): Supported 00:13:36.554 Set Features (09h): Supported 00:13:36.554 Get Features (0Ah): Supported 00:13:36.554 Asynchronous Event Request (0Ch): Supported 00:13:36.554 Keep Alive (18h): Supported 00:13:36.554 I/O Commands 00:13:36.554 ------------ 00:13:36.554 Flush (00h): Supported LBA-Change 00:13:36.554 Write (01h): Supported LBA-Change 00:13:36.554 Read (02h): Supported 00:13:36.554 Compare (05h): Supported 00:13:36.554 Write Zeroes (08h): Supported LBA-Change 00:13:36.554 Dataset Management (09h): Supported LBA-Change 00:13:36.554 Copy (19h): Supported LBA-Change 00:13:36.554 00:13:36.554 Error Log 00:13:36.554 ========= 00:13:36.554 00:13:36.554 Arbitration 00:13:36.554 =========== 00:13:36.554 Arbitration Burst: 1 00:13:36.554 00:13:36.554 Power Management 00:13:36.554 ================ 00:13:36.554 Number of Power States: 1 00:13:36.554 Current Power State: Power State #0 00:13:36.554 Power State #0: 00:13:36.554 Max Power: 0.00 W 00:13:36.554 Non-Operational State: Operational 00:13:36.554 Entry Latency: Not Reported 00:13:36.554 Exit Latency: Not Reported 00:13:36.554 Relative Read Throughput: 0 00:13:36.554 Relative Read Latency: 0 00:13:36.554 Relative Write Throughput: 0 00:13:36.554 Relative Write Latency: 0 00:13:36.554 Idle Power: Not Reported 00:13:36.554 Active Power: Not Reported 00:13:36.554 Non-Operational Permissive Mode: Not 
Supported 00:13:36.554 00:13:36.554 Health Information 00:13:36.554 ================== 00:13:36.554 Critical Warnings: 00:13:36.554 Available Spare Space: OK 00:13:36.554 Temperature: OK 00:13:36.554 Device Reliability: OK 00:13:36.554 Read Only: No 00:13:36.554 Volatile Memory Backup: OK 00:13:36.554 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:36.554 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:36.554 Available Spare: 0% 00:13:36.554 Available Sp[2024-12-06 03:21:56.658080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:36.554 [2024-12-06 03:21:56.665953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:36.554 [2024-12-06 03:21:56.665982] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:36.554 [2024-12-06 03:21:56.665991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.554 [2024-12-06 03:21:56.665997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.554 [2024-12-06 03:21:56.666002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.554 [2024-12-06 03:21:56.666007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.554 [2024-12-06 03:21:56.666044] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:36.554 [2024-12-06 03:21:56.666055] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:36.554 
[2024-12-06 03:21:56.667057] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:36.554 [2024-12-06 03:21:56.667101] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:36.554 [2024-12-06 03:21:56.667107] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:36.554 [2024-12-06 03:21:56.668063] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:36.554 [2024-12-06 03:21:56.668075] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:36.554 [2024-12-06 03:21:56.668119] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:36.554 [2024-12-06 03:21:56.669098] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:36.813 are Threshold: 0% 00:13:36.813 Life Percentage Used: 0% 00:13:36.813 Data Units Read: 0 00:13:36.813 Data Units Written: 0 00:13:36.813 Host Read Commands: 0 00:13:36.813 Host Write Commands: 0 00:13:36.813 Controller Busy Time: 0 minutes 00:13:36.813 Power Cycles: 0 00:13:36.813 Power On Hours: 0 hours 00:13:36.813 Unsafe Shutdowns: 0 00:13:36.813 Unrecoverable Media Errors: 0 00:13:36.813 Lifetime Error Log Entries: 0 00:13:36.813 Warning Temperature Time: 0 minutes 00:13:36.813 Critical Temperature Time: 0 minutes 00:13:36.813 00:13:36.813 Number of Queues 00:13:36.813 ================ 00:13:36.813 Number of I/O Submission Queues: 127 00:13:36.813 Number of I/O Completion Queues: 127 00:13:36.813 00:13:36.813 Active Namespaces 00:13:36.813 ================= 00:13:36.813 Namespace ID:1 00:13:36.813 Error Recovery Timeout: Unlimited 
00:13:36.813 Command Set Identifier: NVM (00h) 00:13:36.813 Deallocate: Supported 00:13:36.813 Deallocated/Unwritten Error: Not Supported 00:13:36.813 Deallocated Read Value: Unknown 00:13:36.813 Deallocate in Write Zeroes: Not Supported 00:13:36.813 Deallocated Guard Field: 0xFFFF 00:13:36.813 Flush: Supported 00:13:36.813 Reservation: Supported 00:13:36.813 Namespace Sharing Capabilities: Multiple Controllers 00:13:36.813 Size (in LBAs): 131072 (0GiB) 00:13:36.813 Capacity (in LBAs): 131072 (0GiB) 00:13:36.813 Utilization (in LBAs): 131072 (0GiB) 00:13:36.813 NGUID: 39429F25221146D28551B6CC188526E5 00:13:36.813 UUID: 39429f25-2211-46d2-8551-b6cc188526e5 00:13:36.813 Thin Provisioning: Not Supported 00:13:36.813 Per-NS Atomic Units: Yes 00:13:36.813 Atomic Boundary Size (Normal): 0 00:13:36.813 Atomic Boundary Size (PFail): 0 00:13:36.813 Atomic Boundary Offset: 0 00:13:36.813 Maximum Single Source Range Length: 65535 00:13:36.813 Maximum Copy Length: 65535 00:13:36.813 Maximum Source Range Count: 1 00:13:36.813 NGUID/EUI64 Never Reused: No 00:13:36.813 Namespace Write Protected: No 00:13:36.813 Number of LBA Formats: 1 00:13:36.813 Current LBA Format: LBA Format #00 00:13:36.813 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:36.813 00:13:36.813 03:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:36.813 [2024-12-06 03:21:56.906529] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:42.077 Initializing NVMe Controllers 00:13:42.077 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:42.077 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:42.077 Initialization complete. Launching workers. 00:13:42.077 ======================================================== 00:13:42.077 Latency(us) 00:13:42.077 Device Information : IOPS MiB/s Average min max 00:13:42.077 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39953.21 156.07 3203.35 1008.30 6593.00 00:13:42.077 ======================================================== 00:13:42.077 Total : 39953.21 156.07 3203.35 1008.30 6593.00 00:13:42.077 00:13:42.077 [2024-12-06 03:22:02.010223] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:42.077 03:22:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:42.336 [2024-12-06 03:22:02.257910] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:47.600 Initializing NVMe Controllers 00:13:47.600 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:47.600 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:47.600 Initialization complete. Launching workers. 
00:13:47.600 ======================================================== 00:13:47.600 Latency(us) 00:13:47.600 Device Information : IOPS MiB/s Average min max 00:13:47.600 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39946.55 156.04 3204.11 1004.05 9562.89 00:13:47.600 ======================================================== 00:13:47.600 Total : 39946.55 156.04 3204.11 1004.05 9562.89 00:13:47.600 00:13:47.600 [2024-12-06 03:22:07.277398] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:47.600 03:22:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:47.600 [2024-12-06 03:22:07.493515] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:52.863 [2024-12-06 03:22:12.622039] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:52.863 Initializing NVMe Controllers 00:13:52.863 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:52.863 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:52.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:52.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:52.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:52.863 Initialization complete. Launching workers. 
00:13:52.863 Starting thread on core 2 00:13:52.863 Starting thread on core 3 00:13:52.863 Starting thread on core 1 00:13:52.863 03:22:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:52.863 [2024-12-06 03:22:12.915346] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.149 [2024-12-06 03:22:15.969780] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.149 Initializing NVMe Controllers 00:13:56.149 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.149 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.149 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:56.149 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:56.149 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:56.149 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:56.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:56.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:56.149 Initialization complete. Launching workers. 
00:13:56.149 Starting thread on core 1 with urgent priority queue 00:13:56.149 Starting thread on core 2 with urgent priority queue 00:13:56.149 Starting thread on core 3 with urgent priority queue 00:13:56.150 Starting thread on core 0 with urgent priority queue 00:13:56.150 SPDK bdev Controller (SPDK2 ) core 0: 10289.67 IO/s 9.72 secs/100000 ios 00:13:56.150 SPDK bdev Controller (SPDK2 ) core 1: 9131.67 IO/s 10.95 secs/100000 ios 00:13:56.150 SPDK bdev Controller (SPDK2 ) core 2: 9964.67 IO/s 10.04 secs/100000 ios 00:13:56.150 SPDK bdev Controller (SPDK2 ) core 3: 7900.67 IO/s 12.66 secs/100000 ios 00:13:56.150 ======================================================== 00:13:56.150 00:13:56.150 03:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:56.150 [2024-12-06 03:22:16.259111] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.150 Initializing NVMe Controllers 00:13:56.150 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.150 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.150 Namespace ID: 1 size: 0GB 00:13:56.150 Initialization complete. 00:13:56.150 INFO: using host memory buffer for IO 00:13:56.150 Hello world! 
00:13:56.150 [2024-12-06 03:22:16.272207] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.408 03:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:56.667 [2024-12-06 03:22:16.558812] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:57.603 Initializing NVMe Controllers 00:13:57.603 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:57.603 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:57.603 Initialization complete. Launching workers. 00:13:57.603 submit (in ns) avg, min, max = 7847.8, 3199.1, 4007561.7 00:13:57.603 complete (in ns) avg, min, max = 17378.0, 1759.1, 3999869.6 00:13:57.603 00:13:57.603 Submit histogram 00:13:57.603 ================ 00:13:57.603 Range in us Cumulative Count 00:13:57.603 3.186 - 3.200: 0.0061% ( 1) 00:13:57.603 3.200 - 3.214: 0.0244% ( 3) 00:13:57.603 3.214 - 3.228: 0.1161% ( 15) 00:13:57.603 3.228 - 3.242: 0.3115% ( 32) 00:13:57.603 3.242 - 3.256: 0.5864% ( 45) 00:13:57.603 3.256 - 3.270: 0.8797% ( 48) 00:13:57.603 3.270 - 3.283: 1.3256% ( 73) 00:13:57.603 3.283 - 3.297: 2.7367% ( 231) 00:13:57.603 3.297 - 3.311: 6.2798% ( 580) 00:13:57.603 3.311 - 3.325: 11.1057% ( 790) 00:13:57.603 3.325 - 3.339: 16.8235% ( 936) 00:13:57.603 3.339 - 3.353: 23.0666% ( 1022) 00:13:57.603 3.353 - 3.367: 28.7172% ( 925) 00:13:57.603 3.367 - 3.381: 33.6530% ( 808) 00:13:57.603 3.381 - 3.395: 39.0409% ( 882) 00:13:57.603 3.395 - 3.409: 44.3128% ( 863) 00:13:57.603 3.409 - 3.423: 48.5950% ( 701) 00:13:57.603 3.423 - 3.437: 52.1258% ( 578) 00:13:57.603 3.437 - 3.450: 57.5687% ( 891) 00:13:57.603 3.450 - 3.464: 63.7202% ( 1007) 00:13:57.603 3.464 - 3.478: 68.0391% ( 707) 
00:13:57.603 3.478 - 3.492: 72.6451% ( 754) 00:13:57.603 3.492 - 3.506: 77.8314% ( 849) 00:13:57.603 3.506 - 3.520: 81.3378% ( 574) 00:13:57.603 3.520 - 3.534: 83.7874% ( 401) 00:13:57.603 3.534 - 3.548: 85.4368% ( 270) 00:13:57.603 3.548 - 3.562: 86.3836% ( 155) 00:13:57.603 3.562 - 3.590: 87.5382% ( 189) 00:13:57.603 3.590 - 3.617: 88.9432% ( 230) 00:13:57.603 3.617 - 3.645: 90.5437% ( 262) 00:13:57.603 3.645 - 3.673: 92.3030% ( 288) 00:13:57.603 3.673 - 3.701: 93.9707% ( 273) 00:13:57.603 3.701 - 3.729: 95.6628% ( 277) 00:13:57.603 3.729 - 3.757: 97.2083% ( 253) 00:13:57.603 3.757 - 3.784: 98.2407% ( 169) 00:13:57.603 3.784 - 3.812: 98.8699% ( 103) 00:13:57.603 3.812 - 3.840: 99.2059% ( 55) 00:13:57.603 3.840 - 3.868: 99.4563% ( 41) 00:13:57.603 3.868 - 3.896: 99.5418% ( 14) 00:13:57.603 3.896 - 3.923: 99.5907% ( 8) 00:13:57.603 3.923 - 3.951: 99.6029% ( 2) 00:13:57.603 3.951 - 3.979: 99.6090% ( 1) 00:13:57.603 4.007 - 4.035: 99.6151% ( 1) 00:13:57.603 4.257 - 4.285: 99.6213% ( 1) 00:13:57.603 5.009 - 5.037: 99.6274% ( 1) 00:13:57.603 5.064 - 5.092: 99.6335% ( 1) 00:13:57.603 5.092 - 5.120: 99.6457% ( 2) 00:13:57.603 5.176 - 5.203: 99.6579% ( 2) 00:13:57.603 5.287 - 5.315: 99.6640% ( 1) 00:13:57.603 5.426 - 5.454: 99.6701% ( 1) 00:13:57.603 5.510 - 5.537: 99.6762% ( 1) 00:13:57.603 5.593 - 5.621: 99.6823% ( 1) 00:13:57.603 5.621 - 5.649: 99.6885% ( 1) 00:13:57.603 5.677 - 5.704: 99.6946% ( 1) 00:13:57.603 5.704 - 5.732: 99.7068% ( 2) 00:13:57.603 5.760 - 5.788: 99.7129% ( 1) 00:13:57.603 5.788 - 5.816: 99.7251% ( 2) 00:13:57.603 5.843 - 5.871: 99.7312% ( 1) 00:13:57.603 5.871 - 5.899: 99.7373% ( 1) 00:13:57.603 5.899 - 5.927: 99.7495% ( 2) 00:13:57.603 5.955 - 5.983: 99.7557% ( 1) 00:13:57.603 6.094 - 6.122: 99.7618% ( 1) 00:13:57.603 6.177 - 6.205: 99.7740% ( 2) 00:13:57.603 6.205 - 6.233: 99.7801% ( 1) 00:13:57.603 6.261 - 6.289: 99.7862% ( 1) 00:13:57.603 6.372 - 6.400: 99.7923% ( 1) 00:13:57.603 6.456 - 6.483: 99.7984% ( 1) 00:13:57.603 6.511 - 6.539: 
99.8045% ( 1) 00:13:57.603 6.539 - 6.567: 99.8106% ( 1) 00:13:57.603 6.817 - 6.845: 99.8167% ( 1) 00:13:57.603 6.901 - 6.929: 99.8228% ( 1) 00:13:57.603 7.068 - 7.096: 99.8290% ( 1) 00:13:57.603 7.179 - 7.235: 99.8351% ( 1) 00:13:57.603 7.402 - 7.457: 99.8412% ( 1) 00:13:57.603 7.624 - 7.680: 99.8473% ( 1) 00:13:57.603 7.680 - 7.736: 99.8595% ( 2) 00:13:57.603 8.125 - 8.181: 99.8656% ( 1) 00:13:57.603 8.348 - 8.403: 99.8717% ( 1) 00:13:57.603 13.690 - 13.746: 99.8839% ( 2) 00:13:57.603 [2024-12-06 03:22:17.659027] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:57.603 13.857 - 13.913: 99.8900% ( 1) 00:13:57.603 3989.148 - 4017.642: 100.0000% ( 18) 00:13:57.603 00:13:57.603 Complete histogram 00:13:57.603 ================== 00:13:57.603 Range in us Cumulative Count 00:13:57.603 1.753 - 1.760: 0.0061% ( 1) 00:13:57.603 1.760 - 1.767: 0.0428% ( 6) 00:13:57.603 1.767 - 1.774: 0.3665% ( 53) 00:13:57.603 1.774 - 1.781: 1.0263% ( 108) 00:13:57.603 1.781 - 1.795: 1.5394% ( 84) 00:13:57.603 1.795 - 1.809: 1.6188% ( 13) 00:13:57.603 1.809 - 1.823: 2.9139% ( 212) 00:13:57.603 1.823 - 1.837: 18.9249% ( 2621) 00:13:57.603 1.837 - 1.850: 27.4282% ( 1392) 00:13:57.603 1.850 - 1.864: 29.5235% ( 343) 00:13:57.603 1.864 - 1.878: 34.5082% ( 816) 00:13:57.603 1.878 - 1.892: 73.1643% ( 6328) 00:13:57.603 1.892 - 1.906: 92.1686% ( 3111) 00:13:57.603 1.906 - 1.920: 95.9682% ( 622) 00:13:57.603 1.920 - 1.934: 97.2327% ( 207) 00:13:57.603 1.934 - 1.948: 97.8131% ( 95) 00:13:57.603 1.948 - 1.962: 98.4545% ( 105) 00:13:57.603 1.962 - 1.976: 99.0531% ( 98) 00:13:57.603 1.976 - 1.990: 99.2670% ( 35) 00:13:57.603 1.990 - 2.003: 99.3464% ( 13) 00:13:57.603 2.003 - 2.017: 99.3586% ( 2) 00:13:57.603 2.017 - 2.031: 99.3830% ( 4) 00:13:57.603 2.031 - 2.045: 99.3952% ( 2) 00:13:57.603 2.045 - 2.059: 99.4013% ( 1) 00:13:57.603 2.073 - 2.087: 99.4075% ( 1) 00:13:57.603 2.087 - 2.101: 99.4136% ( 1) 00:13:57.603 2.101 - 2.115: 99.4197% ( 1) 
00:13:57.603 2.365 - 2.379: 99.4258% ( 1) 00:13:57.603 2.421 - 2.435: 99.4319% ( 1) 00:13:57.603 2.435 - 2.449: 99.4380% ( 1) 00:13:57.603 3.506 - 3.520: 99.4441% ( 1) 00:13:57.603 3.729 - 3.757: 99.4502% ( 1) 00:13:57.603 3.979 - 4.007: 99.4624% ( 2) 00:13:57.603 4.007 - 4.035: 99.4685% ( 1) 00:13:57.603 4.035 - 4.063: 99.4746% ( 1) 00:13:57.603 4.063 - 4.090: 99.4808% ( 1) 00:13:57.603 4.202 - 4.230: 99.4869% ( 1) 00:13:57.603 4.230 - 4.257: 99.4930% ( 1) 00:13:57.603 4.397 - 4.424: 99.4991% ( 1) 00:13:57.603 4.563 - 4.591: 99.5113% ( 2) 00:13:57.603 4.703 - 4.730: 99.5174% ( 1) 00:13:57.603 4.730 - 4.758: 99.5235% ( 1) 00:13:57.603 4.786 - 4.814: 99.5296% ( 1) 00:13:57.604 5.176 - 5.203: 99.5357% ( 1) 00:13:57.604 5.593 - 5.621: 99.5418% ( 1) 00:13:57.604 6.038 - 6.066: 99.5480% ( 1) 00:13:57.604 6.177 - 6.205: 99.5541% ( 1) 00:13:57.604 6.233 - 6.261: 99.5602% ( 1) 00:13:57.604 6.289 - 6.317: 99.5663% ( 1) 00:13:57.604 11.186 - 11.242: 99.5724% ( 1) 00:13:57.604 11.965 - 12.021: 99.5785% ( 1) 00:13:57.604 12.188 - 12.243: 99.5846% ( 1) 00:13:57.604 12.243 - 12.299: 99.5907% ( 1) 00:13:57.604 12.410 - 12.466: 99.5968% ( 1) 00:13:57.604 14.803 - 14.915: 99.6029% ( 1) 00:13:57.604 39.847 - 40.070: 99.6090% ( 1) 00:13:57.604 1994.574 - 2008.821: 99.6151% ( 1) 00:13:57.604 3989.148 - 4017.642: 100.0000% ( 63) 00:13:57.604 00:13:57.604 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:57.604 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:57.604 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:57.604 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:57.604 03:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:57.863 [ 00:13:57.863 { 00:13:57.863 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:57.863 "subtype": "Discovery", 00:13:57.863 "listen_addresses": [], 00:13:57.863 "allow_any_host": true, 00:13:57.863 "hosts": [] 00:13:57.863 }, 00:13:57.863 { 00:13:57.863 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:57.863 "subtype": "NVMe", 00:13:57.863 "listen_addresses": [ 00:13:57.863 { 00:13:57.863 "trtype": "VFIOUSER", 00:13:57.863 "adrfam": "IPv4", 00:13:57.863 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:57.863 "trsvcid": "0" 00:13:57.863 } 00:13:57.863 ], 00:13:57.863 "allow_any_host": true, 00:13:57.863 "hosts": [], 00:13:57.863 "serial_number": "SPDK1", 00:13:57.863 "model_number": "SPDK bdev Controller", 00:13:57.863 "max_namespaces": 32, 00:13:57.863 "min_cntlid": 1, 00:13:57.863 "max_cntlid": 65519, 00:13:57.863 "namespaces": [ 00:13:57.863 { 00:13:57.863 "nsid": 1, 00:13:57.863 "bdev_name": "Malloc1", 00:13:57.863 "name": "Malloc1", 00:13:57.863 "nguid": "4B7520F6A446461B9361987872DEC65A", 00:13:57.863 "uuid": "4b7520f6-a446-461b-9361-987872dec65a" 00:13:57.863 }, 00:13:57.863 { 00:13:57.863 "nsid": 2, 00:13:57.863 "bdev_name": "Malloc3", 00:13:57.863 "name": "Malloc3", 00:13:57.863 "nguid": "1082082D121C468E9A30AFBD3E5F430B", 00:13:57.863 "uuid": "1082082d-121c-468e-9a30-afbd3e5f430b" 00:13:57.863 } 00:13:57.863 ] 00:13:57.863 }, 00:13:57.863 { 00:13:57.863 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:57.863 "subtype": "NVMe", 00:13:57.863 "listen_addresses": [ 00:13:57.863 { 00:13:57.863 "trtype": "VFIOUSER", 00:13:57.863 "adrfam": "IPv4", 00:13:57.863 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:57.863 "trsvcid": "0" 00:13:57.863 } 00:13:57.863 ], 00:13:57.863 "allow_any_host": true, 00:13:57.863 "hosts": [], 00:13:57.863 "serial_number": "SPDK2", 00:13:57.863 
"model_number": "SPDK bdev Controller", 00:13:57.863 "max_namespaces": 32, 00:13:57.863 "min_cntlid": 1, 00:13:57.863 "max_cntlid": 65519, 00:13:57.863 "namespaces": [ 00:13:57.863 { 00:13:57.863 "nsid": 1, 00:13:57.863 "bdev_name": "Malloc2", 00:13:57.863 "name": "Malloc2", 00:13:57.863 "nguid": "39429F25221146D28551B6CC188526E5", 00:13:57.863 "uuid": "39429f25-2211-46d2-8551-b6cc188526e5" 00:13:57.863 } 00:13:57.863 ] 00:13:57.863 } 00:13:57.863 ] 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2583682 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:57.863 03:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:58.121 [2024-12-06 03:22:18.065331] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:58.121 Malloc4 00:13:58.121 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:58.379 [2024-12-06 03:22:18.323316] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:58.379 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:58.379 Asynchronous Event Request test 00:13:58.379 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:58.379 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:58.379 Registering asynchronous event callbacks... 00:13:58.379 Starting namespace attribute notice tests for all controllers... 00:13:58.379 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:58.379 aer_cb - Changed Namespace 00:13:58.379 Cleaning up... 
00:13:58.638 [ 00:13:58.638 { 00:13:58.638 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:58.638 "subtype": "Discovery", 00:13:58.638 "listen_addresses": [], 00:13:58.638 "allow_any_host": true, 00:13:58.638 "hosts": [] 00:13:58.638 }, 00:13:58.638 { 00:13:58.638 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:58.638 "subtype": "NVMe", 00:13:58.638 "listen_addresses": [ 00:13:58.638 { 00:13:58.638 "trtype": "VFIOUSER", 00:13:58.638 "adrfam": "IPv4", 00:13:58.638 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:58.638 "trsvcid": "0" 00:13:58.638 } 00:13:58.638 ], 00:13:58.638 "allow_any_host": true, 00:13:58.638 "hosts": [], 00:13:58.638 "serial_number": "SPDK1", 00:13:58.638 "model_number": "SPDK bdev Controller", 00:13:58.638 "max_namespaces": 32, 00:13:58.638 "min_cntlid": 1, 00:13:58.638 "max_cntlid": 65519, 00:13:58.638 "namespaces": [ 00:13:58.638 { 00:13:58.638 "nsid": 1, 00:13:58.638 "bdev_name": "Malloc1", 00:13:58.638 "name": "Malloc1", 00:13:58.638 "nguid": "4B7520F6A446461B9361987872DEC65A", 00:13:58.638 "uuid": "4b7520f6-a446-461b-9361-987872dec65a" 00:13:58.638 }, 00:13:58.638 { 00:13:58.638 "nsid": 2, 00:13:58.638 "bdev_name": "Malloc3", 00:13:58.638 "name": "Malloc3", 00:13:58.638 "nguid": "1082082D121C468E9A30AFBD3E5F430B", 00:13:58.638 "uuid": "1082082d-121c-468e-9a30-afbd3e5f430b" 00:13:58.638 } 00:13:58.638 ] 00:13:58.638 }, 00:13:58.638 { 00:13:58.638 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:58.638 "subtype": "NVMe", 00:13:58.638 "listen_addresses": [ 00:13:58.638 { 00:13:58.638 "trtype": "VFIOUSER", 00:13:58.638 "adrfam": "IPv4", 00:13:58.638 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:58.638 "trsvcid": "0" 00:13:58.638 } 00:13:58.638 ], 00:13:58.638 "allow_any_host": true, 00:13:58.638 "hosts": [], 00:13:58.638 "serial_number": "SPDK2", 00:13:58.638 "model_number": "SPDK bdev Controller", 00:13:58.638 "max_namespaces": 32, 00:13:58.638 "min_cntlid": 1, 00:13:58.638 "max_cntlid": 65519, 00:13:58.638 "namespaces": [ 
00:13:58.638 { 00:13:58.638 "nsid": 1, 00:13:58.638 "bdev_name": "Malloc2", 00:13:58.638 "name": "Malloc2", 00:13:58.638 "nguid": "39429F25221146D28551B6CC188526E5", 00:13:58.638 "uuid": "39429f25-2211-46d2-8551-b6cc188526e5" 00:13:58.638 }, 00:13:58.638 { 00:13:58.638 "nsid": 2, 00:13:58.638 "bdev_name": "Malloc4", 00:13:58.638 "name": "Malloc4", 00:13:58.638 "nguid": "1E904E5D69F84355AB3CA17F109E1AA4", 00:13:58.638 "uuid": "1e904e5d-69f8-4355-ab3c-a17f109e1aa4" 00:13:58.638 } 00:13:58.638 ] 00:13:58.638 } 00:13:58.638 ] 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2583682 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2575985 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2575985 ']' 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2575985 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2575985 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2575985' 00:13:58.638 killing process with pid 2575985 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2575985 00:13:58.638 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2575985 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2583849 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2583849' 00:13:58.897 Process pid: 2583849 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2583849 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2583849 ']' 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.897 
03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.897 03:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:58.897 [2024-12-06 03:22:18.888458] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:58.897 [2024-12-06 03:22:18.889390] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:13:58.897 [2024-12-06 03:22:18.889430] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.897 [2024-12-06 03:22:18.953325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.897 [2024-12-06 03:22:18.995907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.897 [2024-12-06 03:22:18.995946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.897 [2024-12-06 03:22:18.995956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.897 [2024-12-06 03:22:18.995962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.897 [2024-12-06 03:22:18.995967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:58.897 [2024-12-06 03:22:18.999967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.897 [2024-12-06 03:22:18.999987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.897 [2024-12-06 03:22:19.000072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.897 [2024-12-06 03:22:19.000074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.156 [2024-12-06 03:22:19.069715] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:59.156 [2024-12-06 03:22:19.069793] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:59.156 [2024-12-06 03:22:19.069930] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:59.156 [2024-12-06 03:22:19.070214] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:59.156 [2024-12-06 03:22:19.070379] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:59.156 03:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.156 03:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:59.156 03:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:00.092 03:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:00.352 03:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:00.352 03:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:00.352 03:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:00.352 03:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:00.352 03:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:00.610 Malloc1 00:14:00.610 03:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:00.610 03:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:00.869 03:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:01.127 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:01.127 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:01.127 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:01.386 Malloc2 00:14:01.386 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:01.386 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:01.647 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2583849 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2583849 ']' 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2583849 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.906 03:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2583849 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2583849' 00:14:01.906 killing process with pid 2583849 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2583849 00:14:01.906 03:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2583849 00:14:02.165 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:02.165 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:02.165 00:14:02.165 real 0m50.782s 00:14:02.165 user 3m16.556s 00:14:02.165 sys 0m3.216s 00:14:02.165 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.165 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:02.165 ************************************ 00:14:02.165 END TEST nvmf_vfio_user 00:14:02.165 ************************************ 00:14:02.166 03:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:02.166 03:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:02.166 03:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.166 03:22:22 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.166 ************************************ 00:14:02.166 START TEST nvmf_vfio_user_nvme_compliance 00:14:02.166 ************************************ 00:14:02.166 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:02.166 * Looking for test storage... 00:14:02.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:02.166 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:02.166 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:02.166 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.426 03:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.426 03:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:02.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.426 --rc genhtml_branch_coverage=1 00:14:02.426 --rc genhtml_function_coverage=1 00:14:02.426 --rc genhtml_legend=1 00:14:02.426 --rc geninfo_all_blocks=1 00:14:02.426 --rc geninfo_unexecuted_blocks=1 00:14:02.426 00:14:02.426 ' 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:02.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.426 --rc genhtml_branch_coverage=1 00:14:02.426 --rc genhtml_function_coverage=1 00:14:02.426 --rc genhtml_legend=1 00:14:02.426 --rc geninfo_all_blocks=1 00:14:02.426 --rc geninfo_unexecuted_blocks=1 00:14:02.426 00:14:02.426 ' 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:02.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.426 --rc genhtml_branch_coverage=1 00:14:02.426 --rc genhtml_function_coverage=1 00:14:02.426 --rc 
genhtml_legend=1 00:14:02.426 --rc geninfo_all_blocks=1 00:14:02.426 --rc geninfo_unexecuted_blocks=1 00:14:02.426 00:14:02.426 ' 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:02.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.426 --rc genhtml_branch_coverage=1 00:14:02.426 --rc genhtml_function_coverage=1 00:14:02.426 --rc genhtml_legend=1 00:14:02.426 --rc geninfo_all_blocks=1 00:14:02.426 --rc geninfo_unexecuted_blocks=1 00:14:02.426 00:14:02.426 ' 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.426 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.427 03:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.427 03:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2584611 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2584611' 00:14:02.427 Process pid: 2584611 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2584611 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2584611 ']' 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.427 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.427 [2024-12-06 03:22:22.463444] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:14:02.427 [2024-12-06 03:22:22.463495] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.427 [2024-12-06 03:22:22.526005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.686 [2024-12-06 03:22:22.568678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.686 [2024-12-06 03:22:22.568713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.686 [2024-12-06 03:22:22.568721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.686 [2024-12-06 03:22:22.568727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.686 [2024-12-06 03:22:22.568735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:02.686 [2024-12-06 03:22:22.573964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.686 [2024-12-06 03:22:22.573982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.686 [2024-12-06 03:22:22.573985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.686 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.686 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:02.686 03:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.621 03:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.621 malloc0 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:03.621 03:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:03.880 00:14:03.880 00:14:03.880 CUnit - A unit testing framework for C - Version 2.1-3 00:14:03.880 http://cunit.sourceforge.net/ 00:14:03.880 00:14:03.880 00:14:03.880 Suite: nvme_compliance 00:14:03.880 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 03:22:23.904422] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.880 [2024-12-06 03:22:23.905784] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:03.880 [2024-12-06 03:22:23.905800] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:03.880 [2024-12-06 03:22:23.905807] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:03.880 [2024-12-06 03:22:23.907439] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.880 passed 00:14:03.880 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 03:22:23.987010] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.880 [2024-12-06 03:22:23.990025] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.880 passed 00:14:04.138 Test: admin_identify_ns ...[2024-12-06 03:22:24.072466] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.138 [2024-12-06 03:22:24.132961] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:04.138 [2024-12-06 03:22:24.140960] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:04.138 [2024-12-06 03:22:24.162062] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:04.138 passed 00:14:04.138 Test: admin_get_features_mandatory_features ...[2024-12-06 03:22:24.238227] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.138 [2024-12-06 03:22:24.241247] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.138 passed 00:14:04.396 Test: admin_get_features_optional_features ...[2024-12-06 03:22:24.319755] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.396 [2024-12-06 03:22:24.322774] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.396 passed 00:14:04.396 Test: admin_set_features_number_of_queues ...[2024-12-06 03:22:24.400439] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.396 [2024-12-06 03:22:24.505045] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.396 passed 00:14:04.655 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 03:22:24.583147] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.655 [2024-12-06 03:22:24.586166] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.655 passed 00:14:04.655 Test: admin_get_log_page_with_lpo ...[2024-12-06 03:22:24.664099] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.655 [2024-12-06 03:22:24.732959] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:04.655 [2024-12-06 03:22:24.746043] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.655 passed 00:14:04.913 Test: fabric_property_get ...[2024-12-06 03:22:24.823952] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.913 [2024-12-06 03:22:24.825204] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:04.913 [2024-12-06 03:22:24.826972] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.913 passed 00:14:04.913 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 03:22:24.905489] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.913 [2024-12-06 03:22:24.906719] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:04.913 [2024-12-06 03:22:24.908507] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.913 passed 00:14:04.913 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 03:22:24.987455] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.172 [2024-12-06 03:22:25.071958] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:05.172 [2024-12-06 03:22:25.087961] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:05.172 [2024-12-06 03:22:25.093030] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.172 passed 00:14:05.172 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 03:22:25.169127] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.173 [2024-12-06 03:22:25.170365] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:05.173 [2024-12-06 03:22:25.172143] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.173 passed 00:14:05.173 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 03:22:25.252578] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.431 [2024-12-06 03:22:25.327960] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:05.431 [2024-12-06 
03:22:25.351956] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:05.431 [2024-12-06 03:22:25.357045] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.431 passed 00:14:05.431 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 03:22:25.432145] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.431 [2024-12-06 03:22:25.433376] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:05.431 [2024-12-06 03:22:25.433401] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:05.431 [2024-12-06 03:22:25.436175] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.431 passed 00:14:05.431 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 03:22:25.514468] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.689 [2024-12-06 03:22:25.605955] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:05.689 [2024-12-06 03:22:25.613953] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:05.689 [2024-12-06 03:22:25.621957] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:05.689 [2024-12-06 03:22:25.629953] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:05.689 [2024-12-06 03:22:25.659040] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.689 passed 00:14:05.689 Test: admin_create_io_sq_verify_pc ...[2024-12-06 03:22:25.734201] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.689 [2024-12-06 03:22:25.750962] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:05.689 [2024-12-06 03:22:25.768377] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.689 passed 00:14:05.946 Test: admin_create_io_qp_max_qps ...[2024-12-06 03:22:25.848901] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.882 [2024-12-06 03:22:26.946957] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:07.443 [2024-12-06 03:22:27.352047] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.443 passed 00:14:07.443 Test: admin_create_io_sq_shared_cq ...[2024-12-06 03:22:27.428150] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.443 [2024-12-06 03:22:27.559965] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:07.699 [2024-12-06 03:22:27.597016] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.699 passed 00:14:07.699 00:14:07.699 Run Summary: Type Total Ran Passed Failed Inactive 00:14:07.699 suites 1 1 n/a 0 0 00:14:07.699 tests 18 18 18 0 0 00:14:07.699 asserts 360 360 360 0 n/a 00:14:07.699 00:14:07.699 Elapsed time = 1.518 seconds 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2584611 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2584611 ']' 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2584611 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2584611 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2584611' 00:14:07.699 killing process with pid 2584611 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2584611 00:14:07.699 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2584611 00:14:07.956 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:07.956 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:07.956 00:14:07.956 real 0m5.679s 00:14:07.956 user 0m15.880s 00:14:07.956 sys 0m0.517s 00:14:07.956 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.956 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:07.956 ************************************ 00:14:07.956 END TEST nvmf_vfio_user_nvme_compliance 00:14:07.956 ************************************ 00:14:07.956 03:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:07.956 03:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:07.956 03:22:27 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.956 03:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:07.956 ************************************ 00:14:07.956 START TEST nvmf_vfio_user_fuzz 00:14:07.956 ************************************ 00:14:07.956 03:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:07.956 * Looking for test storage... 00:14:07.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.956 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:07.956 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:14:07.956 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.215 03:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:08.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.215 --rc genhtml_branch_coverage=1 00:14:08.215 --rc genhtml_function_coverage=1 00:14:08.215 --rc genhtml_legend=1 00:14:08.215 --rc geninfo_all_blocks=1 00:14:08.215 --rc geninfo_unexecuted_blocks=1 00:14:08.215 00:14:08.215 ' 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:08.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.215 --rc genhtml_branch_coverage=1 00:14:08.215 --rc genhtml_function_coverage=1 00:14:08.215 --rc genhtml_legend=1 00:14:08.215 --rc geninfo_all_blocks=1 00:14:08.215 --rc geninfo_unexecuted_blocks=1 00:14:08.215 00:14:08.215 ' 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:08.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.215 --rc genhtml_branch_coverage=1 00:14:08.215 --rc genhtml_function_coverage=1 00:14:08.215 --rc genhtml_legend=1 00:14:08.215 --rc geninfo_all_blocks=1 00:14:08.215 --rc geninfo_unexecuted_blocks=1 00:14:08.215 00:14:08.215 ' 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:08.215 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:08.215 --rc genhtml_branch_coverage=1 00:14:08.215 --rc genhtml_function_coverage=1 00:14:08.215 --rc genhtml_legend=1 00:14:08.215 --rc geninfo_all_blocks=1 00:14:08.215 --rc geninfo_unexecuted_blocks=1 00:14:08.215 00:14:08.215 ' 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.215 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.216 03:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2585595 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2585595' 00:14:08.216 Process pid: 2585595 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2585595 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2585595 ']' 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.216 03:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.216 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.473 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.473 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:08.473 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.405 malloc0 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:09.405 03:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:41.468 Fuzzing completed. Shutting down the fuzz application 00:14:41.468 00:14:41.468 Dumping successful admin opcodes: 00:14:41.468 9, 10, 00:14:41.468 Dumping successful io opcodes: 00:14:41.468 0, 00:14:41.468 NS: 0x20000081ef00 I/O qp, Total commands completed: 1033163, total successful commands: 4070, random_seed: 620549888 00:14:41.468 NS: 0x20000081ef00 admin qp, Total commands completed: 255248, total successful commands: 62, random_seed: 3820062656 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2585595 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2585595 ']' 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2585595 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2585595 00:14:41.468 03:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2585595' 00:14:41.468 killing process with pid 2585595 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2585595 00:14:41.468 03:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2585595 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:41.468 00:14:41.468 real 0m32.175s 00:14:41.468 user 0m29.777s 00:14:41.468 sys 0m31.744s 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.468 ************************************ 00:14:41.468 END TEST nvmf_vfio_user_fuzz 00:14:41.468 ************************************ 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.468 ************************************ 00:14:41.468 START TEST nvmf_auth_target 00:14:41.468 ************************************ 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:41.468 * Looking for test storage... 00:14:41.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.468 03:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.468 03:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:41.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.468 --rc genhtml_branch_coverage=1 00:14:41.468 --rc genhtml_function_coverage=1 00:14:41.468 --rc genhtml_legend=1 00:14:41.468 --rc geninfo_all_blocks=1 00:14:41.468 --rc geninfo_unexecuted_blocks=1 00:14:41.468 00:14:41.468 ' 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:41.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.468 --rc genhtml_branch_coverage=1 00:14:41.468 --rc genhtml_function_coverage=1 00:14:41.468 --rc genhtml_legend=1 00:14:41.468 --rc geninfo_all_blocks=1 00:14:41.468 --rc geninfo_unexecuted_blocks=1 00:14:41.468 00:14:41.468 ' 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:41.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.468 --rc genhtml_branch_coverage=1 00:14:41.468 --rc genhtml_function_coverage=1 00:14:41.468 --rc genhtml_legend=1 00:14:41.468 --rc geninfo_all_blocks=1 00:14:41.468 --rc geninfo_unexecuted_blocks=1 00:14:41.468 00:14:41.468 ' 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:41.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.468 --rc genhtml_branch_coverage=1 00:14:41.468 --rc genhtml_function_coverage=1 00:14:41.468 --rc genhtml_legend=1 00:14:41.468 
--rc geninfo_all_blocks=1 00:14:41.468 --rc geninfo_unexecuted_blocks=1 00:14:41.468 00:14:41.468 ' 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.468 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.469 
03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:41.469 03:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.469 03:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.469 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:45.662 03:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:45.662 03:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:45.662 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:45.662 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.662 
03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:45.662 Found net devices under 0000:86:00.0: cvl_0_0 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:45.662 
03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:45.662 Found net devices under 0000:86:00.1: cvl_0_1 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:45.662 03:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.662 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:45.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:14:45.663 00:14:45.663 --- 10.0.0.2 ping statistics --- 00:14:45.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.663 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:14:45.663 00:14:45.663 --- 10.0.0.1 ping statistics --- 00:14:45.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.663 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2593904 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2593904 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2593904 ']' 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.663 03:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.920 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.920 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:45.920 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.920 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:45.920 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2593925 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:45.921 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=854935b375de715be0b9dfb3669b48f7fde6fd79ff619576 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2ZC 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 854935b375de715be0b9dfb3669b48f7fde6fd79ff619576 0 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 854935b375de715be0b9dfb3669b48f7fde6fd79ff619576 0 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=854935b375de715be0b9dfb3669b48f7fde6fd79ff619576 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2ZC 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2ZC 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.2ZC 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:46.178 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fee0d3354e415028cca92a7c30c8afc664f151cd695223ebb5479ad15d1e3867 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Kro 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fee0d3354e415028cca92a7c30c8afc664f151cd695223ebb5479ad15d1e3867 3 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fee0d3354e415028cca92a7c30c8afc664f151cd695223ebb5479ad15d1e3867 3 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fee0d3354e415028cca92a7c30c8afc664f151cd695223ebb5479ad15d1e3867 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Kro 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Kro 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Kro 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f074acb3a4bb8115f9972b2877c23f68 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oLP 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f074acb3a4bb8115f9972b2877c23f68 1 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
f074acb3a4bb8115f9972b2877c23f68 1 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f074acb3a4bb8115f9972b2877c23f68 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oLP 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oLP 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.oLP 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc55eba2185efb73ed475825b11e03dadeb5884ad5852854 00:14:46.179 03:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.U87 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc55eba2185efb73ed475825b11e03dadeb5884ad5852854 2 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc55eba2185efb73ed475825b11e03dadeb5884ad5852854 2 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc55eba2185efb73ed475825b11e03dadeb5884ad5852854 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.U87 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.U87 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.U87 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a0ca4a09aef786b77ce75bcfc778debe9fc52ae6d8e744b7 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pIY 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a0ca4a09aef786b77ce75bcfc778debe9fc52ae6d8e744b7 2 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a0ca4a09aef786b77ce75bcfc778debe9fc52ae6d8e744b7 2 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a0ca4a09aef786b77ce75bcfc778debe9fc52ae6d8e744b7 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:46.179 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pIY 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pIY 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.pIY 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=35adee76d945d1d2918ad5894599986c 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5eB 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 35adee76d945d1d2918ad5894599986c 1 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 35adee76d945d1d2918ad5894599986c 1 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.437 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=35adee76d945d1d2918ad5894599986c 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5eB 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5eB 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.5eB 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d24642741e2b712131624df3f9ff3fd985018e27dfef3729a759fb9ca58a3689 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lkS 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d24642741e2b712131624df3f9ff3fd985018e27dfef3729a759fb9ca58a3689 3 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 d24642741e2b712131624df3f9ff3fd985018e27dfef3729a759fb9ca58a3689 3 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d24642741e2b712131624df3f9ff3fd985018e27dfef3729a759fb9ca58a3689 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lkS 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lkS 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.lkS 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2593904 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2593904 ']' 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.438 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.696 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.696 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:46.696 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2593925 /var/tmp/host.sock 00:14:46.696 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2593925 ']' 00:14:46.696 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:46.696 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.696 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:46.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:46.696 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.696 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2ZC 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.2ZC 00:14:46.954 03:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.2ZC 00:14:46.954 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Kro ]] 00:14:46.954 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Kro 00:14:46.954 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.954 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.954 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.954 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Kro 00:14:46.954 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Kro 00:14:47.213 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:47.213 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oLP 00:14:47.213 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.213 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.213 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.213 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.oLP 00:14:47.213 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.oLP 00:14:47.472 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.U87 ]] 00:14:47.472 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.U87 00:14:47.472 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.472 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.472 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.472 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.U87 00:14:47.472 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.U87 00:14:47.731 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:47.731 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pIY 00:14:47.731 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.731 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.732 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.732 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.pIY 00:14:47.732 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.pIY 00:14:47.732 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.5eB ]] 00:14:47.732 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5eB 00:14:47.732 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.732 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.991 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.991 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5eB 00:14:47.991 03:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5eB 00:14:47.991 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:47.991 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lkS 00:14:47.991 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.991 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.991 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.991 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lkS 00:14:47.991 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lkS 00:14:48.250 03:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:48.251 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:48.251 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.251 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.251 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:48.251 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.509 03:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.509 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.767 00:14:48.767 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.767 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.767 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.767 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.767 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.767 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.767 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.026 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.026 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.026 { 00:14:49.026 "cntlid": 1, 00:14:49.026 "qid": 0, 00:14:49.026 "state": "enabled", 00:14:49.026 "thread": "nvmf_tgt_poll_group_000", 00:14:49.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:49.026 "listen_address": { 00:14:49.026 "trtype": "TCP", 00:14:49.026 "adrfam": "IPv4", 00:14:49.026 "traddr": "10.0.0.2", 00:14:49.026 "trsvcid": "4420" 00:14:49.026 }, 00:14:49.026 "peer_address": { 00:14:49.026 "trtype": "TCP", 00:14:49.026 "adrfam": "IPv4", 00:14:49.026 "traddr": "10.0.0.1", 00:14:49.026 "trsvcid": "53102" 00:14:49.026 }, 00:14:49.026 "auth": { 00:14:49.026 "state": "completed", 00:14:49.026 "digest": "sha256", 00:14:49.026 "dhgroup": "null" 00:14:49.026 } 00:14:49.026 } 00:14:49.026 ]' 00:14:49.026 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.026 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.026 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.026 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:49.026 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.026 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.026 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.026 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.285 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:14:49.285 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:14:49.851 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.851 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:49.851 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.851 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.851 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.851 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.851 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:49.851 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:50.109 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.110 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.369 00:14:50.369 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.369 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.369 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.628 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.628 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.628 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.628 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.628 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.629 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.629 { 00:14:50.629 "cntlid": 3, 00:14:50.629 "qid": 0, 00:14:50.629 "state": "enabled", 00:14:50.629 "thread": "nvmf_tgt_poll_group_000", 00:14:50.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:50.629 "listen_address": { 00:14:50.629 "trtype": "TCP", 00:14:50.629 "adrfam": "IPv4", 00:14:50.629 
"traddr": "10.0.0.2", 00:14:50.629 "trsvcid": "4420" 00:14:50.629 }, 00:14:50.629 "peer_address": { 00:14:50.629 "trtype": "TCP", 00:14:50.629 "adrfam": "IPv4", 00:14:50.629 "traddr": "10.0.0.1", 00:14:50.629 "trsvcid": "53132" 00:14:50.629 }, 00:14:50.629 "auth": { 00:14:50.629 "state": "completed", 00:14:50.629 "digest": "sha256", 00:14:50.629 "dhgroup": "null" 00:14:50.629 } 00:14:50.629 } 00:14:50.629 ]' 00:14:50.629 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.629 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.629 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.629 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:50.629 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.629 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.629 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.629 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.888 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:14:50.888 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:14:51.456 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.456 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.456 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.456 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.456 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.456 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.456 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:51.456 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:51.714 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:51.714 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.714 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.714 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:51.714 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:51.715 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.715 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.715 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.715 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.715 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.715 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.715 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.715 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.974 00:14:51.974 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.974 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.974 03:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.974 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.974 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.974 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.974 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.974 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.974 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.974 { 00:14:51.974 "cntlid": 5, 00:14:51.974 "qid": 0, 00:14:51.974 "state": "enabled", 00:14:51.974 "thread": "nvmf_tgt_poll_group_000", 00:14:51.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:51.974 "listen_address": { 00:14:51.974 "trtype": "TCP", 00:14:51.974 "adrfam": "IPv4", 00:14:51.974 "traddr": "10.0.0.2", 00:14:51.974 "trsvcid": "4420" 00:14:51.974 }, 00:14:51.974 "peer_address": { 00:14:51.974 "trtype": "TCP", 00:14:51.974 "adrfam": "IPv4", 00:14:51.974 "traddr": "10.0.0.1", 00:14:51.974 "trsvcid": "53152" 00:14:51.974 }, 00:14:51.974 "auth": { 00:14:51.974 "state": "completed", 00:14:51.974 "digest": "sha256", 00:14:51.974 "dhgroup": "null" 00:14:51.974 } 00:14:51.974 } 00:14:51.974 ]' 00:14:51.974 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.233 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.233 03:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.233 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:52.233 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.233 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.233 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.233 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.492 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:14:52.492 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:14:53.061 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.061 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.061 
03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.061 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.061 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.061 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.061 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:53.061 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.061 03:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.061 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.320 00:14:53.320 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.320 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.320 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.579 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.579 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.579 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.579 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.579 03:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.579 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.579 { 00:14:53.579 "cntlid": 7, 00:14:53.579 "qid": 0, 00:14:53.579 "state": "enabled", 00:14:53.579 "thread": "nvmf_tgt_poll_group_000", 00:14:53.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:53.579 "listen_address": { 00:14:53.579 "trtype": "TCP", 00:14:53.579 "adrfam": "IPv4", 00:14:53.579 "traddr": "10.0.0.2", 00:14:53.579 "trsvcid": "4420" 00:14:53.579 }, 00:14:53.579 "peer_address": { 00:14:53.579 "trtype": "TCP", 00:14:53.579 "adrfam": "IPv4", 00:14:53.579 "traddr": "10.0.0.1", 00:14:53.579 "trsvcid": "53184" 00:14:53.579 }, 00:14:53.579 "auth": { 00:14:53.579 "state": "completed", 00:14:53.579 "digest": "sha256", 00:14:53.579 "dhgroup": "null" 00:14:53.579 } 00:14:53.579 } 00:14:53.579 ]' 00:14:53.579 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.579 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.579 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.579 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:53.838 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.838 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.838 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.838 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:53.838 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:14:53.838 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:14:54.405 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.405 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:54.405 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.405 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.405 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.405 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.405 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.405 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:54.405 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.664 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.664 03:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.923 00:14:54.923 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.923 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.923 03:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.181 { 00:14:55.181 "cntlid": 9, 00:14:55.181 "qid": 0, 00:14:55.181 "state": "enabled", 00:14:55.181 "thread": "nvmf_tgt_poll_group_000", 00:14:55.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:55.181 "listen_address": { 00:14:55.181 "trtype": "TCP", 00:14:55.181 "adrfam": "IPv4", 00:14:55.181 "traddr": "10.0.0.2", 00:14:55.181 "trsvcid": "4420" 00:14:55.181 }, 00:14:55.181 "peer_address": { 
00:14:55.181 "trtype": "TCP", 00:14:55.181 "adrfam": "IPv4", 00:14:55.181 "traddr": "10.0.0.1", 00:14:55.181 "trsvcid": "54540" 00:14:55.181 }, 00:14:55.181 "auth": { 00:14:55.181 "state": "completed", 00:14:55.181 "digest": "sha256", 00:14:55.181 "dhgroup": "ffdhe2048" 00:14:55.181 } 00:14:55.181 } 00:14:55.181 ]' 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.181 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.439 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:14:55.439 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:14:56.005 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.005 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.005 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.005 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.005 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.005 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.005 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:56.005 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.264 03:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.264 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.265 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.265 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.524 00:14:56.524 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.524 03:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.524 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.782 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.782 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.782 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.782 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.782 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.783 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.783 { 00:14:56.783 "cntlid": 11, 00:14:56.783 "qid": 0, 00:14:56.783 "state": "enabled", 00:14:56.783 "thread": "nvmf_tgt_poll_group_000", 00:14:56.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:56.783 "listen_address": { 00:14:56.783 "trtype": "TCP", 00:14:56.783 "adrfam": "IPv4", 00:14:56.783 "traddr": "10.0.0.2", 00:14:56.783 "trsvcid": "4420" 00:14:56.783 }, 00:14:56.783 "peer_address": { 00:14:56.783 "trtype": "TCP", 00:14:56.783 "adrfam": "IPv4", 00:14:56.783 "traddr": "10.0.0.1", 00:14:56.783 "trsvcid": "54562" 00:14:56.783 }, 00:14:56.783 "auth": { 00:14:56.783 "state": "completed", 00:14:56.783 "digest": "sha256", 00:14:56.783 "dhgroup": "ffdhe2048" 00:14:56.783 } 00:14:56.783 } 00:14:56.783 ]' 00:14:56.783 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.783 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:14:56.783 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.783 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.783 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.783 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.783 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.783 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.041 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:14:57.041 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:14:57.608 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.608 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:57.608 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.608 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.608 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.608 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.608 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:57.608 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.866 03:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.866 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.124 00:14:58.124 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.124 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.124 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.382 { 00:14:58.382 "cntlid": 13, 00:14:58.382 "qid": 0, 00:14:58.382 "state": "enabled", 00:14:58.382 "thread": "nvmf_tgt_poll_group_000", 00:14:58.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:58.382 "listen_address": { 00:14:58.382 "trtype": "TCP", 00:14:58.382 "adrfam": "IPv4", 00:14:58.382 "traddr": "10.0.0.2", 00:14:58.382 "trsvcid": "4420" 00:14:58.382 }, 00:14:58.382 "peer_address": { 00:14:58.382 "trtype": "TCP", 00:14:58.382 "adrfam": "IPv4", 00:14:58.382 "traddr": "10.0.0.1", 00:14:58.382 "trsvcid": "54600" 00:14:58.382 }, 00:14:58.382 "auth": { 00:14:58.382 "state": "completed", 00:14:58.382 "digest": "sha256", 00:14:58.382 "dhgroup": "ffdhe2048" 00:14:58.382 } 00:14:58.382 } 00:14:58.382 ]' 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:58.382 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.641 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:14:58.641 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:14:59.209 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.209 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.209 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.209 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.209 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.209 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.209 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:59.209 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.468 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.727 00:14:59.727 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.727 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.727 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.987 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.987 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.987 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.987 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.987 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.987 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.987 { 00:14:59.987 "cntlid": 15, 00:14:59.987 "qid": 0, 00:14:59.987 "state": "enabled", 00:14:59.987 "thread": "nvmf_tgt_poll_group_000", 00:14:59.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:59.987 "listen_address": { 00:14:59.987 "trtype": "TCP", 00:14:59.987 "adrfam": "IPv4", 00:14:59.987 "traddr": "10.0.0.2", 00:14:59.987 "trsvcid": 
"4420" 00:14:59.987 }, 00:14:59.987 "peer_address": { 00:14:59.987 "trtype": "TCP", 00:14:59.987 "adrfam": "IPv4", 00:14:59.987 "traddr": "10.0.0.1", 00:14:59.987 "trsvcid": "54636" 00:14:59.987 }, 00:14:59.987 "auth": { 00:14:59.987 "state": "completed", 00:14:59.987 "digest": "sha256", 00:14:59.987 "dhgroup": "ffdhe2048" 00:14:59.987 } 00:14:59.987 } 00:14:59.987 ]' 00:14:59.987 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.987 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.987 03:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.987 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.987 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.987 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.987 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.987 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.246 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:00.246 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:00.888 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.888 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:00.888 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.888 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.888 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.888 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.888 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.888 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:00.888 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.218 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.218 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.520 { 00:15:01.520 "cntlid": 17, 00:15:01.520 "qid": 0, 00:15:01.520 "state": "enabled", 00:15:01.520 "thread": "nvmf_tgt_poll_group_000", 00:15:01.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:01.520 "listen_address": { 00:15:01.520 "trtype": "TCP", 00:15:01.520 "adrfam": "IPv4", 00:15:01.520 "traddr": "10.0.0.2", 00:15:01.520 "trsvcid": "4420" 00:15:01.520 }, 00:15:01.520 "peer_address": { 00:15:01.520 "trtype": "TCP", 00:15:01.520 "adrfam": "IPv4", 00:15:01.520 "traddr": "10.0.0.1", 00:15:01.520 "trsvcid": "54666" 00:15:01.520 }, 00:15:01.520 "auth": { 00:15:01.520 "state": "completed", 00:15:01.520 "digest": "sha256", 00:15:01.520 "dhgroup": "ffdhe3072" 00:15:01.520 } 00:15:01.520 } 00:15:01.520 ]' 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.520 03:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:01.520 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.777 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.777 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.777 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.777 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:01.777 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:02.342 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.342 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:02.342 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.342 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.342 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.342 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.342 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:02.342 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.600 03:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.600 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.857 00:15:02.857 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.857 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.857 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.116 { 00:15:03.116 "cntlid": 19, 00:15:03.116 "qid": 0, 00:15:03.116 "state": "enabled", 00:15:03.116 "thread": "nvmf_tgt_poll_group_000", 00:15:03.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:03.116 "listen_address": { 00:15:03.116 "trtype": "TCP", 00:15:03.116 "adrfam": "IPv4", 00:15:03.116 "traddr": "10.0.0.2", 00:15:03.116 "trsvcid": "4420" 00:15:03.116 }, 00:15:03.116 "peer_address": { 00:15:03.116 "trtype": "TCP", 00:15:03.116 "adrfam": "IPv4", 00:15:03.116 "traddr": "10.0.0.1", 00:15:03.116 "trsvcid": "54692" 00:15:03.116 }, 00:15:03.116 "auth": { 00:15:03.116 "state": "completed", 00:15:03.116 "digest": "sha256", 00:15:03.116 "dhgroup": "ffdhe3072" 00:15:03.116 } 00:15:03.116 } 00:15:03.116 ]' 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:03.116 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.375 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.375 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:03.375 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.375 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:03.375 03:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:03.942 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.942 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:03.942 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.942 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.942 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.942 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.942 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:03.942 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.201 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.459 00:15:04.459 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.459 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.459 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.718 { 00:15:04.718 "cntlid": 21, 00:15:04.718 "qid": 0, 00:15:04.718 "state": "enabled", 00:15:04.718 "thread": "nvmf_tgt_poll_group_000", 00:15:04.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:04.718 "listen_address": { 
00:15:04.718 "trtype": "TCP", 00:15:04.718 "adrfam": "IPv4", 00:15:04.718 "traddr": "10.0.0.2", 00:15:04.718 "trsvcid": "4420" 00:15:04.718 }, 00:15:04.718 "peer_address": { 00:15:04.718 "trtype": "TCP", 00:15:04.718 "adrfam": "IPv4", 00:15:04.718 "traddr": "10.0.0.1", 00:15:04.718 "trsvcid": "48206" 00:15:04.718 }, 00:15:04.718 "auth": { 00:15:04.718 "state": "completed", 00:15:04.718 "digest": "sha256", 00:15:04.718 "dhgroup": "ffdhe3072" 00:15:04.718 } 00:15:04.718 } 00:15:04.718 ]' 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.718 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.977 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.977 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.977 03:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.977 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:04.977 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:05.543 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.543 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.543 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.543 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.543 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.543 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.543 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:05.543 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.801 03:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.059 00:15:06.059 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.059 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:06.059 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.318 { 00:15:06.318 "cntlid": 23, 00:15:06.318 "qid": 0, 00:15:06.318 "state": "enabled", 00:15:06.318 "thread": "nvmf_tgt_poll_group_000", 00:15:06.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:06.318 "listen_address": { 00:15:06.318 "trtype": "TCP", 00:15:06.318 "adrfam": "IPv4", 00:15:06.318 "traddr": "10.0.0.2", 00:15:06.318 "trsvcid": "4420" 00:15:06.318 }, 00:15:06.318 "peer_address": { 00:15:06.318 "trtype": "TCP", 00:15:06.318 "adrfam": "IPv4", 00:15:06.318 "traddr": "10.0.0.1", 00:15:06.318 "trsvcid": "48226" 00:15:06.318 }, 00:15:06.318 "auth": { 00:15:06.318 "state": "completed", 00:15:06.318 "digest": "sha256", 00:15:06.318 "dhgroup": "ffdhe3072" 00:15:06.318 } 00:15:06.318 } 00:15:06.318 ]' 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.318 03:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.318 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.576 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.576 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.576 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.576 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:06.576 03:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:07.141 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.141 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.141 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:07.141 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.141 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.141 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.141 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.142 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.142 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.400 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.657 00:15:07.657 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.657 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.657 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.915 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.915 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.915 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.915 03:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.915 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.915 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.915 { 00:15:07.915 "cntlid": 25, 00:15:07.915 "qid": 0, 00:15:07.915 "state": "enabled", 00:15:07.915 "thread": "nvmf_tgt_poll_group_000", 00:15:07.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:07.915 "listen_address": { 00:15:07.915 "trtype": "TCP", 00:15:07.915 "adrfam": "IPv4", 00:15:07.915 "traddr": "10.0.0.2", 00:15:07.915 "trsvcid": "4420" 00:15:07.915 }, 00:15:07.915 "peer_address": { 00:15:07.915 "trtype": "TCP", 00:15:07.915 "adrfam": "IPv4", 00:15:07.916 "traddr": "10.0.0.1", 00:15:07.916 "trsvcid": "48254" 00:15:07.916 }, 00:15:07.916 "auth": { 00:15:07.916 "state": "completed", 00:15:07.916 "digest": "sha256", 00:15:07.916 "dhgroup": "ffdhe4096" 00:15:07.916 } 00:15:07.916 } 00:15:07.916 ]' 00:15:07.916 03:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.916 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.916 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.174 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.174 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.174 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.174 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.174 03:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.174 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:08.174 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:08.739 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.739 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:08.739 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.739 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.996 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.996 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.996 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:08.996 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.997 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.255 00:15:09.255 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.255 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.255 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.514 { 00:15:09.514 "cntlid": 27, 00:15:09.514 "qid": 0, 00:15:09.514 "state": "enabled", 00:15:09.514 "thread": "nvmf_tgt_poll_group_000", 00:15:09.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:09.514 
"listen_address": { 00:15:09.514 "trtype": "TCP", 00:15:09.514 "adrfam": "IPv4", 00:15:09.514 "traddr": "10.0.0.2", 00:15:09.514 "trsvcid": "4420" 00:15:09.514 }, 00:15:09.514 "peer_address": { 00:15:09.514 "trtype": "TCP", 00:15:09.514 "adrfam": "IPv4", 00:15:09.514 "traddr": "10.0.0.1", 00:15:09.514 "trsvcid": "48268" 00:15:09.514 }, 00:15:09.514 "auth": { 00:15:09.514 "state": "completed", 00:15:09.514 "digest": "sha256", 00:15:09.514 "dhgroup": "ffdhe4096" 00:15:09.514 } 00:15:09.514 } 00:15:09.514 ]' 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.514 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:09.773 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.773 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.773 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.773 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.773 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:09.773 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:10.339 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.599 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.858 00:15:10.858 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:10.858 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.858 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.117 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.117 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.117 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.117 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.117 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.117 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.117 { 00:15:11.117 "cntlid": 29, 00:15:11.117 "qid": 0, 00:15:11.117 "state": "enabled", 00:15:11.117 "thread": "nvmf_tgt_poll_group_000", 00:15:11.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:11.117 "listen_address": { 00:15:11.117 "trtype": "TCP", 00:15:11.117 "adrfam": "IPv4", 00:15:11.117 "traddr": "10.0.0.2", 00:15:11.117 "trsvcid": "4420" 00:15:11.117 }, 00:15:11.117 "peer_address": { 00:15:11.117 "trtype": "TCP", 00:15:11.117 "adrfam": "IPv4", 00:15:11.117 "traddr": "10.0.0.1", 00:15:11.117 "trsvcid": "48304" 00:15:11.117 }, 00:15:11.117 "auth": { 00:15:11.117 "state": "completed", 00:15:11.117 "digest": "sha256", 00:15:11.117 "dhgroup": "ffdhe4096" 00:15:11.117 } 00:15:11.117 } 00:15:11.117 ]' 00:15:11.117 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.117 03:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.117 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.377 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.377 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.377 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.377 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.377 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.377 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:11.377 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:11.945 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.945 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:11.945 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.945 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:12.204 03:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.204 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.463 00:15:12.463 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.463 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.463 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.723 03:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.723 { 00:15:12.723 "cntlid": 31, 00:15:12.723 "qid": 0, 00:15:12.723 "state": "enabled", 00:15:12.723 "thread": "nvmf_tgt_poll_group_000", 00:15:12.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:12.723 "listen_address": { 00:15:12.723 "trtype": "TCP", 00:15:12.723 "adrfam": "IPv4", 00:15:12.723 "traddr": "10.0.0.2", 00:15:12.723 "trsvcid": "4420" 00:15:12.723 }, 00:15:12.723 "peer_address": { 00:15:12.723 "trtype": "TCP", 00:15:12.723 "adrfam": "IPv4", 00:15:12.723 "traddr": "10.0.0.1", 00:15:12.723 "trsvcid": "48322" 00:15:12.723 }, 00:15:12.723 "auth": { 00:15:12.723 "state": "completed", 00:15:12.723 "digest": "sha256", 00:15:12.723 "dhgroup": "ffdhe4096" 00:15:12.723 } 00:15:12.723 } 00:15:12.723 ]' 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.723 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.982 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.982 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.982 03:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.982 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:12.982 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:13.551 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.551 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.551 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.551 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.551 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.551 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.551 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.551 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:15:13.551 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.811 03:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.070 00:15:14.071 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.071 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.071 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.330 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.330 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.330 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.330 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.330 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.330 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.330 { 00:15:14.330 "cntlid": 33, 00:15:14.330 "qid": 0, 00:15:14.330 "state": "enabled", 00:15:14.330 "thread": "nvmf_tgt_poll_group_000", 00:15:14.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:14.330 "listen_address": { 
00:15:14.330 "trtype": "TCP", 00:15:14.330 "adrfam": "IPv4", 00:15:14.330 "traddr": "10.0.0.2", 00:15:14.330 "trsvcid": "4420" 00:15:14.330 }, 00:15:14.330 "peer_address": { 00:15:14.330 "trtype": "TCP", 00:15:14.330 "adrfam": "IPv4", 00:15:14.330 "traddr": "10.0.0.1", 00:15:14.330 "trsvcid": "47402" 00:15:14.330 }, 00:15:14.330 "auth": { 00:15:14.330 "state": "completed", 00:15:14.330 "digest": "sha256", 00:15:14.330 "dhgroup": "ffdhe6144" 00:15:14.330 } 00:15:14.330 } 00:15:14.330 ]' 00:15:14.330 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.330 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.330 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.590 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:14.590 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.590 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.590 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.590 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.590 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:14.590 03:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:15.157 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.157 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.157 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.157 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.416 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.982 00:15:15.982 03:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.982 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.982 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.982 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.982 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.982 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.982 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.982 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.982 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.982 { 00:15:15.982 "cntlid": 35, 00:15:15.982 "qid": 0, 00:15:15.982 "state": "enabled", 00:15:15.982 "thread": "nvmf_tgt_poll_group_000", 00:15:15.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:15.982 "listen_address": { 00:15:15.982 "trtype": "TCP", 00:15:15.982 "adrfam": "IPv4", 00:15:15.982 "traddr": "10.0.0.2", 00:15:15.982 "trsvcid": "4420" 00:15:15.982 }, 00:15:15.982 "peer_address": { 00:15:15.982 "trtype": "TCP", 00:15:15.982 "adrfam": "IPv4", 00:15:15.982 "traddr": "10.0.0.1", 00:15:15.982 "trsvcid": "47430" 00:15:15.982 }, 00:15:15.982 "auth": { 00:15:15.982 "state": "completed", 00:15:15.982 "digest": "sha256", 00:15:15.982 "dhgroup": "ffdhe6144" 00:15:15.982 } 00:15:15.982 } 00:15:15.982 ]' 00:15:15.982 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:15.982 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.983 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.240 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:16.240 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.240 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.240 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.240 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.499 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:16.499 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:17.068 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.068 03:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.068 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.068 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.068 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.068 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.068 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.068 03:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.068 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.638
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:17.638 {
00:15:17.638 "cntlid": 37,
00:15:17.638 "qid": 0,
00:15:17.638 "state": "enabled",
00:15:17.638 "thread": "nvmf_tgt_poll_group_000",
00:15:17.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:17.638 "listen_address": {
00:15:17.638 "trtype": "TCP",
00:15:17.638 "adrfam": "IPv4",
00:15:17.638 "traddr": "10.0.0.2",
00:15:17.638 "trsvcid": "4420"
00:15:17.638 },
00:15:17.638 "peer_address": {
00:15:17.638 "trtype": "TCP",
00:15:17.638 "adrfam": "IPv4",
00:15:17.638 "traddr": "10.0.0.1",
00:15:17.638 "trsvcid": "47462"
00:15:17.638 },
00:15:17.638 "auth": {
00:15:17.638 "state": "completed",
00:15:17.638 "digest": "sha256",
00:15:17.638 "dhgroup": "ffdhe6144"
00:15:17.638 }
00:15:17.638 }
00:15:17.638 ]'
00:15:17.638 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:17.897 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:17.897 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:17.897 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:17.897 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:17.897 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:17.897 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:17.897 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:18.156 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6:
00:15:18.156 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6:
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:18.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.724 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.984 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.984 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:18.984 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:18.984 03:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:19.243
00:15:19.243 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:19.243 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:19.243 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:19.503 {
00:15:19.503 "cntlid": 39,
00:15:19.503 "qid": 0,
00:15:19.503 "state": "enabled",
00:15:19.503 "thread": "nvmf_tgt_poll_group_000",
00:15:19.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:19.503 "listen_address": {
00:15:19.503 "trtype": "TCP",
00:15:19.503 "adrfam": "IPv4",
00:15:19.503 "traddr": "10.0.0.2",
00:15:19.503 "trsvcid": "4420"
00:15:19.503 },
00:15:19.503 "peer_address": {
00:15:19.503 "trtype": "TCP",
00:15:19.503 "adrfam": "IPv4",
00:15:19.503 "traddr": "10.0.0.1",
00:15:19.503 "trsvcid": "47504"
00:15:19.503 },
00:15:19.503 "auth": {
00:15:19.503 "state": "completed",
00:15:19.503 "digest": "sha256",
00:15:19.503 "dhgroup": "ffdhe6144"
00:15:19.503 }
00:15:19.503 }
00:15:19.503 ]'
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:19.503 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:19.763 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=:
00:15:19.763 03:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=:
00:15:20.329 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:20.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:20.329 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:20.329 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.329 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.329 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.329 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:20.329 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:20.329 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:20.329 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:20.587 03:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.153
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:21.153 {
00:15:21.153 "cntlid": 41,
00:15:21.153 "qid": 0,
00:15:21.153 "state": "enabled",
00:15:21.153 "thread": "nvmf_tgt_poll_group_000",
00:15:21.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:21.153 "listen_address": {
00:15:21.153 "trtype": "TCP",
00:15:21.153 "adrfam": "IPv4",
00:15:21.153 "traddr": "10.0.0.2",
00:15:21.153 "trsvcid": "4420"
00:15:21.153 },
00:15:21.153 "peer_address": {
00:15:21.153 "trtype": "TCP",
00:15:21.153 "adrfam": "IPv4",
00:15:21.153 "traddr": "10.0.0.1",
00:15:21.153 "trsvcid": "47536"
00:15:21.153 },
00:15:21.153 "auth": {
00:15:21.153 "state": "completed",
00:15:21.153 "digest": "sha256",
00:15:21.153 "dhgroup": "ffdhe8192"
00:15:21.153 }
00:15:21.153 }
00:15:21.153 ]'
00:15:21.153 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:21.411 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:21.412 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:21.412 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:21.412 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:21.412 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:21.412 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:21.412 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:21.670 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=:
00:15:21.670 03:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=:
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:22.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.238 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.497 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.497 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:22.497 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:22.497 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:22.756
00:15:22.756 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:22.756 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:22.756 03:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:23.017 {
00:15:23.017 "cntlid": 43,
00:15:23.017 "qid": 0,
00:15:23.017 "state": "enabled",
00:15:23.017 "thread": "nvmf_tgt_poll_group_000",
00:15:23.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:23.017 "listen_address": {
00:15:23.017 "trtype": "TCP",
00:15:23.017 "adrfam": "IPv4",
00:15:23.017 "traddr": "10.0.0.2",
00:15:23.017 "trsvcid": "4420"
00:15:23.017 },
00:15:23.017 "peer_address": {
00:15:23.017 "trtype": "TCP",
00:15:23.017 "adrfam": "IPv4",
00:15:23.017 "traddr": "10.0.0.1",
00:15:23.017 "trsvcid": "47566"
00:15:23.017 },
00:15:23.017 "auth": {
00:15:23.017 "state": "completed",
00:15:23.017 "digest": "sha256",
00:15:23.017 "dhgroup": "ffdhe8192"
00:15:23.017 }
00:15:23.017 }
00:15:23.017 ]'
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:23.017 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:23.276 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:23.276 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:23.276 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:23.277 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==:
00:15:23.277 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==:
00:15:23.845 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:23.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:23.846 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:23.846 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.846 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.846 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.846 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:23.846 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:23.846 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.105 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.106 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.106 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.674
00:15:24.674 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:24.674 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:24.674 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:24.933 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:24.933 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:24.933 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.933 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.933 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.933 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:24.933 {
00:15:24.933 "cntlid": 45,
00:15:24.933 "qid": 0,
00:15:24.933 "state": "enabled",
00:15:24.933 "thread": "nvmf_tgt_poll_group_000",
00:15:24.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:24.933 "listen_address": {
00:15:24.933 "trtype": "TCP",
00:15:24.933 "adrfam": "IPv4",
00:15:24.933 "traddr": "10.0.0.2",
00:15:24.933 "trsvcid": "4420"
00:15:24.933 },
00:15:24.933 "peer_address": {
00:15:24.933 "trtype": "TCP",
00:15:24.933 "adrfam": "IPv4",
00:15:24.933 "traddr": "10.0.0.1",
00:15:24.933 "trsvcid": "53908"
00:15:24.933 },
00:15:24.933 "auth": {
00:15:24.933 "state": "completed",
00:15:24.933 "digest": "sha256",
00:15:24.933 "dhgroup": "ffdhe8192"
00:15:24.933 }
00:15:24.933 }
00:15:24.933 ]'
00:15:24.934 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:24.934 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:24.934 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:24.934 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:24.934 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:24.934 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:24.934 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:24.934 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:25.205 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6:
00:15:25.205 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6:
00:15:25.773 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:25.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:25.773 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:25.773 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.773 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.773 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.773 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:25.773 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:25.773 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:26.031 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:26.596
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:26.596 {
00:15:26.596 "cntlid": 47,
00:15:26.596 "qid": 0,
00:15:26.596 "state": "enabled",
00:15:26.596 "thread": "nvmf_tgt_poll_group_000",
00:15:26.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:26.596 "listen_address": {
00:15:26.596 "trtype": "TCP",
00:15:26.596 "adrfam": "IPv4",
00:15:26.596 "traddr": "10.0.0.2",
00:15:26.596 "trsvcid": "4420"
00:15:26.596 },
00:15:26.596 "peer_address": {
00:15:26.596 "trtype": "TCP",
00:15:26.596 "adrfam": "IPv4",
00:15:26.596 "traddr": "10.0.0.1",
00:15:26.596 "trsvcid": "53920"
00:15:26.596 },
00:15:26.596 "auth": {
00:15:26.596 "state": "completed",
00:15:26.596 "digest": "sha256",
00:15:26.596 "dhgroup": "ffdhe8192"
00:15:26.596 }
00:15:26.596 }
00:15:26.596 ]'
00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:26.596 03:23:46
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.596 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.854 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.854 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.854 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.854 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.854 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.112 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:27.112 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.678 03:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.935 00:15:27.935 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.935 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.935 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.193 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.193 03:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.193 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.193 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.193 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.193 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.193 { 00:15:28.193 "cntlid": 49, 00:15:28.193 "qid": 0, 00:15:28.193 "state": "enabled", 00:15:28.193 "thread": "nvmf_tgt_poll_group_000", 00:15:28.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:28.193 "listen_address": { 00:15:28.193 "trtype": "TCP", 00:15:28.193 "adrfam": "IPv4", 00:15:28.193 "traddr": "10.0.0.2", 00:15:28.193 "trsvcid": "4420" 00:15:28.193 }, 00:15:28.193 "peer_address": { 00:15:28.193 "trtype": "TCP", 00:15:28.193 "adrfam": "IPv4", 00:15:28.193 "traddr": "10.0.0.1", 00:15:28.193 "trsvcid": "53938" 00:15:28.193 }, 00:15:28.193 "auth": { 00:15:28.193 "state": "completed", 00:15:28.193 "digest": "sha384", 00:15:28.193 "dhgroup": "null" 00:15:28.193 } 00:15:28.193 } 00:15:28.193 ]' 00:15:28.193 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.193 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.193 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.451 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:28.451 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.451 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.451 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.451 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.451 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:28.451 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:29.016 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.275 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.534 00:15:29.534 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.534 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.534 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.792 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.792 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.792 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.792 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.792 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.792 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.792 { 00:15:29.792 "cntlid": 51, 
00:15:29.792 "qid": 0, 00:15:29.792 "state": "enabled", 00:15:29.792 "thread": "nvmf_tgt_poll_group_000", 00:15:29.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:29.792 "listen_address": { 00:15:29.792 "trtype": "TCP", 00:15:29.792 "adrfam": "IPv4", 00:15:29.792 "traddr": "10.0.0.2", 00:15:29.792 "trsvcid": "4420" 00:15:29.792 }, 00:15:29.792 "peer_address": { 00:15:29.792 "trtype": "TCP", 00:15:29.792 "adrfam": "IPv4", 00:15:29.792 "traddr": "10.0.0.1", 00:15:29.792 "trsvcid": "53952" 00:15:29.792 }, 00:15:29.792 "auth": { 00:15:29.792 "state": "completed", 00:15:29.792 "digest": "sha384", 00:15:29.792 "dhgroup": "null" 00:15:29.792 } 00:15:29.792 } 00:15:29.792 ]' 00:15:29.792 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.792 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.792 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.050 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:30.050 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.050 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.050 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.050 03:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.050 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret 
DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:30.050 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:30.615 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.874 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.874 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.875 03:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.134 00:15:31.134 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.134 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.134 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.393 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.393 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.393 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.393 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.393 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.393 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.393 { 00:15:31.393 "cntlid": 53, 00:15:31.393 "qid": 0, 00:15:31.393 "state": "enabled", 00:15:31.393 "thread": "nvmf_tgt_poll_group_000", 00:15:31.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:31.393 "listen_address": { 00:15:31.393 "trtype": "TCP", 00:15:31.393 "adrfam": "IPv4", 00:15:31.393 "traddr": "10.0.0.2", 00:15:31.393 "trsvcid": "4420" 00:15:31.393 }, 00:15:31.393 "peer_address": { 00:15:31.393 "trtype": "TCP", 00:15:31.393 "adrfam": "IPv4", 00:15:31.393 "traddr": "10.0.0.1", 00:15:31.393 "trsvcid": "53978" 00:15:31.393 }, 00:15:31.393 "auth": { 00:15:31.393 "state": "completed", 00:15:31.393 "digest": "sha384", 00:15:31.393 "dhgroup": "null" 00:15:31.393 } 00:15:31.393 } 
00:15:31.393 ]' 00:15:31.393 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.393 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.393 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.652 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:31.652 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.652 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.652 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.652 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.652 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:31.652 03:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:32.220 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.479 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.479 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.480 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.480 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:32.480 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.480 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.738 00:15:32.738 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.738 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.738 03:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.997 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.997 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:32.997 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.997 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.997 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.997 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.997 { 00:15:32.997 "cntlid": 55, 00:15:32.997 "qid": 0, 00:15:32.997 "state": "enabled", 00:15:32.997 "thread": "nvmf_tgt_poll_group_000", 00:15:32.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.997 "listen_address": { 00:15:32.997 "trtype": "TCP", 00:15:32.997 "adrfam": "IPv4", 00:15:32.998 "traddr": "10.0.0.2", 00:15:32.998 "trsvcid": "4420" 00:15:32.998 }, 00:15:32.998 "peer_address": { 00:15:32.998 "trtype": "TCP", 00:15:32.998 "adrfam": "IPv4", 00:15:32.998 "traddr": "10.0.0.1", 00:15:32.998 "trsvcid": "54008" 00:15:32.998 }, 00:15:32.998 "auth": { 00:15:32.998 "state": "completed", 00:15:32.998 "digest": "sha384", 00:15:32.998 "dhgroup": "null" 00:15:32.998 } 00:15:32.998 } 00:15:32.998 ]' 00:15:32.998 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.998 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.998 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.257 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:33.257 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.257 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.257 03:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.257 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.257 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:33.257 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:33.825 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.825 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.825 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.825 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.825 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.825 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.825 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.825 03:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:33.825 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.085 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.344 00:15:34.344 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.344 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.344 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.603 { 00:15:34.603 "cntlid": 57, 00:15:34.603 "qid": 0, 00:15:34.603 "state": "enabled", 00:15:34.603 "thread": "nvmf_tgt_poll_group_000", 00:15:34.603 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:34.603 "listen_address": { 00:15:34.603 "trtype": "TCP", 00:15:34.603 "adrfam": "IPv4", 00:15:34.603 "traddr": "10.0.0.2", 00:15:34.603 "trsvcid": "4420" 00:15:34.603 }, 00:15:34.603 "peer_address": { 00:15:34.603 "trtype": "TCP", 00:15:34.603 "adrfam": "IPv4", 00:15:34.603 "traddr": "10.0.0.1", 00:15:34.603 "trsvcid": "34294" 00:15:34.603 }, 00:15:34.603 "auth": { 00:15:34.603 "state": "completed", 00:15:34.603 "digest": "sha384", 00:15:34.603 "dhgroup": "ffdhe2048" 00:15:34.603 } 00:15:34.603 } 00:15:34.603 ]' 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.603 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.604 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.604 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.604 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.862 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret 
DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:34.863 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:35.429 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.429 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.429 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.429 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.429 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.429 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.429 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:35.429 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:35.688 03:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.688 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.946 00:15:35.946 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.946 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.946 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.205 { 00:15:36.205 "cntlid": 59, 00:15:36.205 "qid": 0, 00:15:36.205 "state": "enabled", 00:15:36.205 "thread": "nvmf_tgt_poll_group_000", 00:15:36.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:36.205 "listen_address": { 00:15:36.205 "trtype": "TCP", 00:15:36.205 "adrfam": "IPv4", 00:15:36.205 "traddr": "10.0.0.2", 00:15:36.205 "trsvcid": "4420" 00:15:36.205 }, 00:15:36.205 "peer_address": { 00:15:36.205 "trtype": "TCP", 00:15:36.205 "adrfam": "IPv4", 00:15:36.205 "traddr": "10.0.0.1", 00:15:36.205 "trsvcid": "34318" 00:15:36.205 }, 00:15:36.205 "auth": { 00:15:36.205 "state": 
"completed", 00:15:36.205 "digest": "sha384", 00:15:36.205 "dhgroup": "ffdhe2048" 00:15:36.205 } 00:15:36.205 } 00:15:36.205 ]' 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.205 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.463 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:36.463 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:37.030 03:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.030 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.030 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.030 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.030 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.030 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.030 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:37.030 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.289 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.547 00:15:37.547 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.547 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.547 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.805 
03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.805 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.805 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.805 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.805 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.805 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.805 { 00:15:37.805 "cntlid": 61, 00:15:37.805 "qid": 0, 00:15:37.805 "state": "enabled", 00:15:37.805 "thread": "nvmf_tgt_poll_group_000", 00:15:37.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.805 "listen_address": { 00:15:37.805 "trtype": "TCP", 00:15:37.805 "adrfam": "IPv4", 00:15:37.805 "traddr": "10.0.0.2", 00:15:37.805 "trsvcid": "4420" 00:15:37.805 }, 00:15:37.805 "peer_address": { 00:15:37.805 "trtype": "TCP", 00:15:37.805 "adrfam": "IPv4", 00:15:37.805 "traddr": "10.0.0.1", 00:15:37.806 "trsvcid": "34362" 00:15:37.806 }, 00:15:37.806 "auth": { 00:15:37.806 "state": "completed", 00:15:37.806 "digest": "sha384", 00:15:37.806 "dhgroup": "ffdhe2048" 00:15:37.806 } 00:15:37.806 } 00:15:37.806 ]' 00:15:37.806 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.806 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.806 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.806 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.806 03:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.806 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.806 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.806 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.063 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:38.063 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:38.629 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.629 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.629 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.629 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.629 
03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.629 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.629 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.629 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.888 03:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.888 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.147 00:15:39.147 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.147 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.147 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.406 { 00:15:39.406 "cntlid": 63, 00:15:39.406 
"qid": 0, 00:15:39.406 "state": "enabled", 00:15:39.406 "thread": "nvmf_tgt_poll_group_000", 00:15:39.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.406 "listen_address": { 00:15:39.406 "trtype": "TCP", 00:15:39.406 "adrfam": "IPv4", 00:15:39.406 "traddr": "10.0.0.2", 00:15:39.406 "trsvcid": "4420" 00:15:39.406 }, 00:15:39.406 "peer_address": { 00:15:39.406 "trtype": "TCP", 00:15:39.406 "adrfam": "IPv4", 00:15:39.406 "traddr": "10.0.0.1", 00:15:39.406 "trsvcid": "34392" 00:15:39.406 }, 00:15:39.406 "auth": { 00:15:39.406 "state": "completed", 00:15:39.406 "digest": "sha384", 00:15:39.406 "dhgroup": "ffdhe2048" 00:15:39.406 } 00:15:39.406 } 00:15:39.406 ]' 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.406 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.666 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:39.666 03:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:40.234 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.234 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.234 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.234 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.234 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.234 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.234 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.234 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:40.234 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:40.493 03:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:40.493 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.494 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.753 00:15:40.753 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.753 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.753 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.013 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.013 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.013 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.013 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.013 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.013 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.013 { 00:15:41.013 "cntlid": 65, 00:15:41.013 "qid": 0, 00:15:41.013 "state": "enabled", 00:15:41.013 "thread": "nvmf_tgt_poll_group_000", 00:15:41.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.013 "listen_address": { 00:15:41.013 "trtype": "TCP", 00:15:41.013 "adrfam": "IPv4", 00:15:41.013 "traddr": "10.0.0.2", 00:15:41.013 "trsvcid": "4420" 00:15:41.013 }, 00:15:41.013 "peer_address": { 00:15:41.013 "trtype": "TCP", 00:15:41.013 "adrfam": "IPv4", 00:15:41.013 "traddr": "10.0.0.1", 00:15:41.013 "trsvcid": "34414" 00:15:41.013 }, 00:15:41.013 "auth": { 00:15:41.013 "state": 
"completed", 00:15:41.013 "digest": "sha384", 00:15:41.013 "dhgroup": "ffdhe3072" 00:15:41.013 } 00:15:41.013 } 00:15:41.013 ]' 00:15:41.013 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.013 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.013 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.013 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:41.014 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.014 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.014 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.014 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.273 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:41.273 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret 
DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:41.842 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.842 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.842 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.842 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.842 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.842 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.842 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.842 03:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.102 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.362 00:15:42.362 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.362 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.362 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.621 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.621 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.621 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.621 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.621 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.621 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.621 { 00:15:42.621 "cntlid": 67, 00:15:42.621 "qid": 0, 00:15:42.621 "state": "enabled", 00:15:42.621 "thread": "nvmf_tgt_poll_group_000", 00:15:42.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.621 "listen_address": { 00:15:42.621 "trtype": "TCP", 00:15:42.621 "adrfam": "IPv4", 00:15:42.621 "traddr": "10.0.0.2", 00:15:42.621 "trsvcid": "4420" 00:15:42.621 }, 00:15:42.622 "peer_address": { 00:15:42.622 "trtype": "TCP", 00:15:42.622 "adrfam": "IPv4", 00:15:42.622 "traddr": "10.0.0.1", 00:15:42.622 "trsvcid": "34442" 00:15:42.622 }, 00:15:42.622 "auth": { 00:15:42.622 "state": "completed", 00:15:42.622 "digest": "sha384", 00:15:42.622 "dhgroup": "ffdhe3072" 00:15:42.622 } 00:15:42.622 } 00:15:42.622 ]' 00:15:42.622 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.622 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.622 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.622 03:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.622 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.622 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.622 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.622 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.881 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:42.881 03:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:43.449 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.449 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.449 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:43.449 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.449 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.449 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.449 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.449 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.708 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:43.708 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.708 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.709 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.969 00:15:43.969 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.969 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.969 03:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.228 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.228 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.229 03:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.229 { 00:15:44.229 "cntlid": 69, 00:15:44.229 "qid": 0, 00:15:44.229 "state": "enabled", 00:15:44.229 "thread": "nvmf_tgt_poll_group_000", 00:15:44.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.229 "listen_address": { 00:15:44.229 "trtype": "TCP", 00:15:44.229 "adrfam": "IPv4", 00:15:44.229 "traddr": "10.0.0.2", 00:15:44.229 "trsvcid": "4420" 00:15:44.229 }, 00:15:44.229 "peer_address": { 00:15:44.229 "trtype": "TCP", 00:15:44.229 "adrfam": "IPv4", 00:15:44.229 "traddr": "10.0.0.1", 00:15:44.229 "trsvcid": "60642" 00:15:44.229 }, 00:15:44.229 "auth": { 00:15:44.229 "state": "completed", 00:15:44.229 "digest": "sha384", 00:15:44.229 "dhgroup": "ffdhe3072" 00:15:44.229 } 00:15:44.229 } 00:15:44.229 ]' 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.229 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.488 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:44.488 03:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:45.059 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.059 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.059 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.059 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.059 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.059 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.059 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.059 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.318 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.576 00:15:45.576 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.576 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.576 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.835 { 00:15:45.835 "cntlid": 71, 00:15:45.835 "qid": 0, 00:15:45.835 "state": "enabled", 00:15:45.835 "thread": "nvmf_tgt_poll_group_000", 00:15:45.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.835 "listen_address": { 00:15:45.835 "trtype": "TCP", 00:15:45.835 "adrfam": "IPv4", 00:15:45.835 "traddr": "10.0.0.2", 00:15:45.835 "trsvcid": "4420" 00:15:45.835 }, 00:15:45.835 "peer_address": { 00:15:45.835 "trtype": "TCP", 00:15:45.835 "adrfam": "IPv4", 00:15:45.835 "traddr": "10.0.0.1", 
00:15:45.835 "trsvcid": "60674" 00:15:45.835 }, 00:15:45.835 "auth": { 00:15:45.835 "state": "completed", 00:15:45.835 "digest": "sha384", 00:15:45.835 "dhgroup": "ffdhe3072" 00:15:45.835 } 00:15:45.835 } 00:15:45.835 ]' 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.835 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.093 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:46.093 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:46.660 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.660 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.660 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.660 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.660 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.660 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.660 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.660 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.660 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.918 03:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.918 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.176 00:15:47.176 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.177 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.177 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.435 { 00:15:47.435 "cntlid": 73, 00:15:47.435 "qid": 0, 00:15:47.435 "state": "enabled", 00:15:47.435 "thread": "nvmf_tgt_poll_group_000", 00:15:47.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.435 "listen_address": { 00:15:47.435 "trtype": "TCP", 00:15:47.435 "adrfam": "IPv4", 00:15:47.435 "traddr": "10.0.0.2", 00:15:47.435 "trsvcid": "4420" 00:15:47.435 }, 00:15:47.435 "peer_address": { 00:15:47.435 "trtype": "TCP", 00:15:47.435 "adrfam": "IPv4", 00:15:47.435 "traddr": "10.0.0.1", 00:15:47.435 "trsvcid": "60692" 00:15:47.435 }, 00:15:47.435 "auth": { 00:15:47.435 "state": "completed", 00:15:47.435 "digest": "sha384", 00:15:47.435 "dhgroup": "ffdhe4096" 00:15:47.435 } 00:15:47.435 } 00:15:47.435 ]' 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.435 03:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.435 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.695 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:47.695 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:48.264 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.264 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.264 03:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.264 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.264 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.264 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.264 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:48.264 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.522 03:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.522 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.780 00:15:48.780 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.780 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.780 03:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.039 { 00:15:49.039 "cntlid": 75, 00:15:49.039 "qid": 0, 00:15:49.039 "state": "enabled", 00:15:49.039 "thread": "nvmf_tgt_poll_group_000", 00:15:49.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.039 "listen_address": { 00:15:49.039 "trtype": "TCP", 00:15:49.039 "adrfam": "IPv4", 00:15:49.039 "traddr": "10.0.0.2", 00:15:49.039 "trsvcid": "4420" 00:15:49.039 }, 00:15:49.039 "peer_address": { 00:15:49.039 "trtype": "TCP", 00:15:49.039 "adrfam": "IPv4", 00:15:49.039 "traddr": "10.0.0.1", 00:15:49.039 "trsvcid": "60712" 00:15:49.039 }, 00:15:49.039 "auth": { 00:15:49.039 "state": "completed", 00:15:49.039 "digest": "sha384", 00:15:49.039 "dhgroup": "ffdhe4096" 00:15:49.039 } 00:15:49.039 } 00:15:49.039 ]' 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.039 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.297 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:49.297 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:49.862 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.862 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.862 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.862 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.862 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.862 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.862 03:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.862 03:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.121 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.378 00:15:50.378 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.378 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.378 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.637 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.637 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.637 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.637 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.637 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.637 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.637 { 00:15:50.637 "cntlid": 77, 00:15:50.637 "qid": 0, 00:15:50.637 "state": "enabled", 00:15:50.637 "thread": "nvmf_tgt_poll_group_000", 00:15:50.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.637 "listen_address": { 00:15:50.637 "trtype": "TCP", 00:15:50.637 "adrfam": "IPv4", 00:15:50.637 "traddr": "10.0.0.2", 00:15:50.637 
"trsvcid": "4420" 00:15:50.637 }, 00:15:50.637 "peer_address": { 00:15:50.637 "trtype": "TCP", 00:15:50.637 "adrfam": "IPv4", 00:15:50.637 "traddr": "10.0.0.1", 00:15:50.637 "trsvcid": "60742" 00:15:50.637 }, 00:15:50.637 "auth": { 00:15:50.637 "state": "completed", 00:15:50.637 "digest": "sha384", 00:15:50.637 "dhgroup": "ffdhe4096" 00:15:50.637 } 00:15:50.637 } 00:15:50.637 ]' 00:15:50.637 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.637 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.637 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.902 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:50.902 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.902 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.902 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.902 03:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.902 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:50.902 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:51.469 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.728 03:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.987 00:15:51.987 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.987 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.987 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.247 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.247 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.247 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.247 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.247 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.247 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.247 { 00:15:52.247 "cntlid": 79, 00:15:52.247 "qid": 0, 00:15:52.247 "state": "enabled", 00:15:52.247 "thread": "nvmf_tgt_poll_group_000", 00:15:52.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:52.247 "listen_address": { 00:15:52.247 "trtype": "TCP", 00:15:52.247 "adrfam": "IPv4", 00:15:52.247 "traddr": "10.0.0.2", 00:15:52.247 "trsvcid": "4420" 00:15:52.247 }, 00:15:52.247 "peer_address": { 00:15:52.247 "trtype": "TCP", 00:15:52.247 "adrfam": "IPv4", 00:15:52.247 "traddr": "10.0.0.1", 00:15:52.247 "trsvcid": "60774" 00:15:52.247 }, 00:15:52.247 "auth": { 00:15:52.247 "state": "completed", 00:15:52.247 "digest": "sha384", 00:15:52.247 "dhgroup": "ffdhe4096" 00:15:52.247 } 00:15:52.247 } 00:15:52.247 ]' 00:15:52.247 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.247 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.247 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.506 03:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.506 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.506 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.506 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.506 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.763 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:52.763 03:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.329 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.896 00:15:53.896 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.896 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.896 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.896 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.896 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.896 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.896 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.896 03:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.896 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.896 { 00:15:53.896 "cntlid": 81, 00:15:53.896 "qid": 0, 00:15:53.896 "state": "enabled", 00:15:53.896 "thread": "nvmf_tgt_poll_group_000", 00:15:53.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.896 "listen_address": { 00:15:53.896 "trtype": "TCP", 00:15:53.896 "adrfam": "IPv4", 00:15:53.896 "traddr": "10.0.0.2", 00:15:53.896 "trsvcid": "4420" 00:15:53.896 }, 00:15:53.896 "peer_address": { 00:15:53.896 "trtype": "TCP", 00:15:53.896 "adrfam": "IPv4", 00:15:53.896 "traddr": "10.0.0.1", 00:15:53.896 "trsvcid": "60794" 00:15:53.896 }, 00:15:53.896 "auth": { 00:15:53.896 "state": "completed", 00:15:53.896 "digest": "sha384", 00:15:53.896 "dhgroup": "ffdhe6144" 00:15:53.896 } 00:15:53.896 } 00:15:53.896 ]' 00:15:53.896 03:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.896 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.896 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.155 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:54.155 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.155 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.155 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.155 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.413 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:54.413 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:15:54.981 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.981 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.981 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.982 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.982 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.982 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.982 03:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:54.982 03:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.982 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.548 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.548 { 00:15:55.548 "cntlid": 83, 00:15:55.548 "qid": 0, 00:15:55.548 "state": "enabled", 00:15:55.548 "thread": "nvmf_tgt_poll_group_000", 00:15:55.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.548 "listen_address": { 00:15:55.548 "trtype": "TCP", 00:15:55.548 "adrfam": "IPv4", 00:15:55.548 "traddr": "10.0.0.2", 00:15:55.548 
"trsvcid": "4420" 00:15:55.548 }, 00:15:55.548 "peer_address": { 00:15:55.548 "trtype": "TCP", 00:15:55.548 "adrfam": "IPv4", 00:15:55.548 "traddr": "10.0.0.1", 00:15:55.548 "trsvcid": "49688" 00:15:55.548 }, 00:15:55.548 "auth": { 00:15:55.548 "state": "completed", 00:15:55.548 "digest": "sha384", 00:15:55.548 "dhgroup": "ffdhe6144" 00:15:55.548 } 00:15:55.548 } 00:15:55.548 ]' 00:15:55.548 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.807 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.807 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.807 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:55.807 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.807 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.807 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.807 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.067 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:56.067 03:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.831 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.832 03:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.163 00:15:57.163 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.163 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:57.163 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.422 { 00:15:57.422 "cntlid": 85, 00:15:57.422 "qid": 0, 00:15:57.422 "state": "enabled", 00:15:57.422 "thread": "nvmf_tgt_poll_group_000", 00:15:57.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.422 "listen_address": { 00:15:57.422 "trtype": "TCP", 00:15:57.422 "adrfam": "IPv4", 00:15:57.422 "traddr": "10.0.0.2", 00:15:57.422 "trsvcid": "4420" 00:15:57.422 }, 00:15:57.422 "peer_address": { 00:15:57.422 "trtype": "TCP", 00:15:57.422 "adrfam": "IPv4", 00:15:57.422 "traddr": "10.0.0.1", 00:15:57.422 "trsvcid": "49716" 00:15:57.422 }, 00:15:57.422 "auth": { 00:15:57.422 "state": "completed", 00:15:57.422 "digest": "sha384", 00:15:57.422 "dhgroup": "ffdhe6144" 00:15:57.422 } 00:15:57.422 } 00:15:57.422 ]' 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.422 03:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.422 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.682 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:57.682 03:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:15:58.250 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.250 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.250 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.250 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.250 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.250 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.250 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:58.250 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:58.509 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:58.509 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.509 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.509 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:58.509 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.509 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.509 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:58.510 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.510 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.510 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.510 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.510 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.510 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.769 00:15:58.769 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.769 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.769 03:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.029 { 00:15:59.029 "cntlid": 87, 00:15:59.029 "qid": 0, 00:15:59.029 "state": "enabled", 00:15:59.029 "thread": "nvmf_tgt_poll_group_000", 00:15:59.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:59.029 "listen_address": { 00:15:59.029 "trtype": "TCP", 00:15:59.029 "adrfam": "IPv4", 00:15:59.029 "traddr": "10.0.0.2", 00:15:59.029 "trsvcid": "4420" 00:15:59.029 }, 00:15:59.029 "peer_address": { 00:15:59.029 "trtype": "TCP", 00:15:59.029 "adrfam": "IPv4", 00:15:59.029 "traddr": "10.0.0.1", 00:15:59.029 "trsvcid": "49746" 00:15:59.029 }, 00:15:59.029 "auth": { 00:15:59.029 "state": "completed", 00:15:59.029 "digest": "sha384", 00:15:59.029 "dhgroup": "ffdhe6144" 00:15:59.029 } 00:15:59.029 } 00:15:59.029 ]' 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.029 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.288 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:59.288 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:15:59.857 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.857 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.857 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.857 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.857 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.857 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.857 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.857 03:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.857 03:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.116 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:00.116 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.116 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.116 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:00.117 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.117 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.117 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.117 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.117 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.117 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.117 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.117 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.117 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.685 00:16:00.685 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.685 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.686 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.944 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.944 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.944 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.944 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.944 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.944 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.944 { 00:16:00.944 "cntlid": 89, 00:16:00.944 "qid": 0, 00:16:00.944 "state": "enabled", 00:16:00.944 "thread": "nvmf_tgt_poll_group_000", 00:16:00.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.944 "listen_address": { 00:16:00.944 "trtype": "TCP", 00:16:00.944 "adrfam": "IPv4", 00:16:00.945 "traddr": "10.0.0.2", 00:16:00.945 
"trsvcid": "4420" 00:16:00.945 }, 00:16:00.945 "peer_address": { 00:16:00.945 "trtype": "TCP", 00:16:00.945 "adrfam": "IPv4", 00:16:00.945 "traddr": "10.0.0.1", 00:16:00.945 "trsvcid": "49766" 00:16:00.945 }, 00:16:00.945 "auth": { 00:16:00.945 "state": "completed", 00:16:00.945 "digest": "sha384", 00:16:00.945 "dhgroup": "ffdhe8192" 00:16:00.945 } 00:16:00.945 } 00:16:00.945 ]' 00:16:00.945 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.945 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.945 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.945 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:00.945 03:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.945 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.945 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.945 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.204 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:01.204 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:01.771 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.771 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.771 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.771 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.771 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.771 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.771 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.771 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:02.030 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:02.030 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.030 03:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.030 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:02.030 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:02.030 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.030 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.030 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.030 03:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.030 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.030 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.030 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.031 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.599 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.599 { 00:16:02.599 "cntlid": 91, 00:16:02.599 "qid": 0, 00:16:02.599 "state": "enabled", 00:16:02.599 "thread": "nvmf_tgt_poll_group_000", 00:16:02.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.599 "listen_address": { 00:16:02.599 "trtype": "TCP", 00:16:02.599 "adrfam": "IPv4", 00:16:02.599 "traddr": "10.0.0.2", 00:16:02.599 "trsvcid": "4420" 00:16:02.599 }, 00:16:02.599 "peer_address": { 00:16:02.599 "trtype": "TCP", 00:16:02.599 "adrfam": "IPv4", 00:16:02.599 "traddr": "10.0.0.1", 00:16:02.599 "trsvcid": "49780" 00:16:02.599 }, 00:16:02.599 "auth": { 00:16:02.599 "state": "completed", 00:16:02.599 "digest": "sha384", 00:16:02.599 "dhgroup": "ffdhe8192" 00:16:02.599 } 00:16:02.599 } 00:16:02.599 ]' 00:16:02.599 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.858 03:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.858 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.858 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:02.858 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.858 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.858 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.858 03:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.117 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:03.117 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.685 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.944 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.944 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.944 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.944 03:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.203 00:16:04.203 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.203 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.203 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.462 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.462 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.462 03:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.462 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.462 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.462 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.462 { 00:16:04.462 "cntlid": 93, 00:16:04.462 "qid": 0, 00:16:04.462 "state": "enabled", 00:16:04.462 "thread": "nvmf_tgt_poll_group_000", 00:16:04.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.462 "listen_address": { 00:16:04.462 "trtype": "TCP", 00:16:04.462 "adrfam": "IPv4", 00:16:04.462 "traddr": "10.0.0.2", 00:16:04.462 "trsvcid": "4420" 00:16:04.462 }, 00:16:04.462 "peer_address": { 00:16:04.462 "trtype": "TCP", 00:16:04.462 "adrfam": "IPv4", 00:16:04.462 "traddr": "10.0.0.1", 00:16:04.462 "trsvcid": "52774" 00:16:04.462 }, 00:16:04.462 "auth": { 00:16:04.462 "state": "completed", 00:16:04.462 "digest": "sha384", 00:16:04.462 "dhgroup": "ffdhe8192" 00:16:04.462 } 00:16:04.462 } 00:16:04.462 ]' 00:16:04.462 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.462 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.462 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.721 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:04.721 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.721 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.721 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.721 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.722 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:04.722 03:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:05.289 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.289 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.289 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.289 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.289 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.289 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.289 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:05.289 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.547 03:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.114 00:16:06.114 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.114 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.114 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.373 { 00:16:06.373 "cntlid": 95, 00:16:06.373 "qid": 0, 00:16:06.373 "state": "enabled", 00:16:06.373 "thread": "nvmf_tgt_poll_group_000", 00:16:06.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:06.373 "listen_address": { 00:16:06.373 "trtype": "TCP", 00:16:06.373 "adrfam": 
"IPv4", 00:16:06.373 "traddr": "10.0.0.2", 00:16:06.373 "trsvcid": "4420" 00:16:06.373 }, 00:16:06.373 "peer_address": { 00:16:06.373 "trtype": "TCP", 00:16:06.373 "adrfam": "IPv4", 00:16:06.373 "traddr": "10.0.0.1", 00:16:06.373 "trsvcid": "52800" 00:16:06.373 }, 00:16:06.373 "auth": { 00:16:06.373 "state": "completed", 00:16:06.373 "digest": "sha384", 00:16:06.373 "dhgroup": "ffdhe8192" 00:16:06.373 } 00:16:06.373 } 00:16:06.373 ]' 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.373 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.632 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:06.632 03:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:07.201 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:07.460 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:07.460 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.460 
03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.460 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.460 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.460 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.461 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.461 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.461 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.461 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.461 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.461 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.461 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.720 00:16:07.720 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.720 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.720 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.979 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.979 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.979 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.979 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.979 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.979 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.979 { 00:16:07.979 "cntlid": 97, 00:16:07.979 "qid": 0, 00:16:07.979 "state": "enabled", 00:16:07.979 "thread": "nvmf_tgt_poll_group_000", 00:16:07.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.979 "listen_address": { 00:16:07.979 "trtype": "TCP", 00:16:07.979 "adrfam": "IPv4", 00:16:07.979 "traddr": "10.0.0.2", 00:16:07.979 "trsvcid": "4420" 00:16:07.979 }, 00:16:07.979 "peer_address": { 00:16:07.979 "trtype": "TCP", 00:16:07.979 "adrfam": "IPv4", 00:16:07.979 "traddr": "10.0.0.1", 00:16:07.979 "trsvcid": "52844" 00:16:07.979 }, 00:16:07.979 "auth": { 00:16:07.979 "state": "completed", 00:16:07.979 "digest": "sha512", 00:16:07.979 "dhgroup": "null" 00:16:07.979 } 00:16:07.979 } 00:16:07.979 ]' 00:16:07.979 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.979 03:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.979 03:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.979 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:07.979 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.979 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.979 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.979 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.238 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:08.238 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:08.806 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.806 03:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.806 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.806 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.806 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.806 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.806 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.806 03:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.065 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.324 00:16:09.324 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.324 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.324 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.583 { 00:16:09.583 "cntlid": 99, 00:16:09.583 "qid": 0, 00:16:09.583 "state": "enabled", 00:16:09.583 "thread": "nvmf_tgt_poll_group_000", 00:16:09.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.583 "listen_address": { 00:16:09.583 "trtype": "TCP", 00:16:09.583 "adrfam": "IPv4", 00:16:09.583 "traddr": "10.0.0.2", 00:16:09.583 "trsvcid": "4420" 00:16:09.583 }, 00:16:09.583 "peer_address": { 00:16:09.583 "trtype": "TCP", 00:16:09.583 "adrfam": "IPv4", 00:16:09.583 "traddr": "10.0.0.1", 00:16:09.583 "trsvcid": "52872" 00:16:09.583 }, 00:16:09.583 "auth": { 00:16:09.583 "state": "completed", 00:16:09.583 "digest": "sha512", 00:16:09.583 "dhgroup": "null" 00:16:09.583 } 00:16:09.583 } 00:16:09.583 ]' 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.583 
03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.583 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.841 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:09.842 03:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:10.409 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.409 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.409 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.409 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.409 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.409 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.409 
03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.409 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.669 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.928 00:16:10.928 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.928 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.928 03:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.187 { 00:16:11.187 "cntlid": 101, 00:16:11.187 "qid": 0, 00:16:11.187 "state": "enabled", 00:16:11.187 "thread": "nvmf_tgt_poll_group_000", 00:16:11.187 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.187 "listen_address": { 00:16:11.187 "trtype": "TCP", 00:16:11.187 "adrfam": "IPv4", 00:16:11.187 "traddr": "10.0.0.2", 00:16:11.187 "trsvcid": "4420" 00:16:11.187 }, 00:16:11.187 "peer_address": { 00:16:11.187 "trtype": "TCP", 00:16:11.187 "adrfam": "IPv4", 00:16:11.187 "traddr": "10.0.0.1", 00:16:11.187 "trsvcid": "52904" 00:16:11.187 }, 00:16:11.187 "auth": { 00:16:11.187 "state": "completed", 00:16:11.187 "digest": "sha512", 00:16:11.187 "dhgroup": "null" 00:16:11.187 } 00:16:11.187 } 00:16:11.187 ]' 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.187 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.447 03:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:11.447 03:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:12.017 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.017 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.017 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.017 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.017 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.017 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.017 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:12.017 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.276 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.536 00:16:12.536 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.536 
03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.536 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.796 { 00:16:12.796 "cntlid": 103, 00:16:12.796 "qid": 0, 00:16:12.796 "state": "enabled", 00:16:12.796 "thread": "nvmf_tgt_poll_group_000", 00:16:12.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.796 "listen_address": { 00:16:12.796 "trtype": "TCP", 00:16:12.796 "adrfam": "IPv4", 00:16:12.796 "traddr": "10.0.0.2", 00:16:12.796 "trsvcid": "4420" 00:16:12.796 }, 00:16:12.796 "peer_address": { 00:16:12.796 "trtype": "TCP", 00:16:12.796 "adrfam": "IPv4", 00:16:12.796 "traddr": "10.0.0.1", 00:16:12.796 "trsvcid": "52928" 00:16:12.796 }, 00:16:12.796 "auth": { 00:16:12.796 "state": "completed", 00:16:12.796 "digest": "sha512", 00:16:12.796 "dhgroup": "null" 00:16:12.796 } 00:16:12.796 } 00:16:12.796 ]' 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.796 03:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.055 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:13.055 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:13.624 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.625 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.625 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.625 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.625 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.625 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.625 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.625 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.625 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.884 03:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.143 00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.143 { 00:16:14.143 "cntlid": 105, 00:16:14.143 "qid": 0, 00:16:14.143 "state": "enabled", 00:16:14.143 "thread": "nvmf_tgt_poll_group_000", 00:16:14.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.143 "listen_address": { 00:16:14.143 "trtype": "TCP", 00:16:14.143 "adrfam": "IPv4", 00:16:14.143 "traddr": "10.0.0.2", 00:16:14.143 "trsvcid": "4420" 00:16:14.143 }, 00:16:14.143 "peer_address": { 00:16:14.143 "trtype": "TCP", 00:16:14.143 "adrfam": "IPv4", 00:16:14.143 "traddr": "10.0.0.1", 00:16:14.143 "trsvcid": "57234" 00:16:14.143 }, 00:16:14.143 "auth": { 00:16:14.143 "state": "completed", 00:16:14.143 "digest": "sha512", 00:16:14.143 "dhgroup": "ffdhe2048" 00:16:14.143 } 00:16:14.143 } 00:16:14.143 ]' 00:16:14.143 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.403 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.403 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.403 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.403 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.403 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.403 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.403 03:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.662 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:14.662 03:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.230 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.488 00:16:15.488 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.488 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.488 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.747 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.747 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.747 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.747 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.747 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.747 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.747 { 00:16:15.747 "cntlid": 107, 00:16:15.747 "qid": 0, 00:16:15.747 "state": "enabled", 00:16:15.747 "thread": "nvmf_tgt_poll_group_000", 00:16:15.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.747 
"listen_address": { 00:16:15.747 "trtype": "TCP", 00:16:15.747 "adrfam": "IPv4", 00:16:15.747 "traddr": "10.0.0.2", 00:16:15.747 "trsvcid": "4420" 00:16:15.747 }, 00:16:15.747 "peer_address": { 00:16:15.747 "trtype": "TCP", 00:16:15.747 "adrfam": "IPv4", 00:16:15.747 "traddr": "10.0.0.1", 00:16:15.747 "trsvcid": "57258" 00:16:15.747 }, 00:16:15.747 "auth": { 00:16:15.747 "state": "completed", 00:16:15.747 "digest": "sha512", 00:16:15.747 "dhgroup": "ffdhe2048" 00:16:15.747 } 00:16:15.747 } 00:16:15.747 ]' 00:16:15.747 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.747 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.747 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.006 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:16.006 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.006 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.006 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.006 03:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.006 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:16.006 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:16.574 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.833 03:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.091 00:16:17.091 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:17.091 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.091 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.350 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.350 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.350 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.350 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.350 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.350 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.350 { 00:16:17.350 "cntlid": 109, 00:16:17.350 "qid": 0, 00:16:17.350 "state": "enabled", 00:16:17.350 "thread": "nvmf_tgt_poll_group_000", 00:16:17.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.350 "listen_address": { 00:16:17.350 "trtype": "TCP", 00:16:17.350 "adrfam": "IPv4", 00:16:17.350 "traddr": "10.0.0.2", 00:16:17.350 "trsvcid": "4420" 00:16:17.350 }, 00:16:17.350 "peer_address": { 00:16:17.350 "trtype": "TCP", 00:16:17.350 "adrfam": "IPv4", 00:16:17.350 "traddr": "10.0.0.1", 00:16:17.350 "trsvcid": "57286" 00:16:17.350 }, 00:16:17.350 "auth": { 00:16:17.350 "state": "completed", 00:16:17.350 "digest": "sha512", 00:16:17.350 "dhgroup": "ffdhe2048" 00:16:17.350 } 00:16:17.350 } 00:16:17.350 ]' 00:16:17.350 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.350 03:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.350 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.610 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.610 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.610 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.610 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.610 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.610 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:17.610 03:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:18.178 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.178 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.178 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.178 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.178 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.178 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.178 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.178 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:18.438 03:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.438 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.697 00:16:18.697 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.697 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.697 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.956 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.956 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.956 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.956 03:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.956 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.956 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.956 { 00:16:18.956 "cntlid": 111, 00:16:18.956 "qid": 0, 00:16:18.956 "state": "enabled", 00:16:18.956 "thread": "nvmf_tgt_poll_group_000", 00:16:18.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.956 "listen_address": { 00:16:18.956 "trtype": "TCP", 00:16:18.956 "adrfam": "IPv4", 00:16:18.956 "traddr": "10.0.0.2", 00:16:18.956 "trsvcid": "4420" 00:16:18.956 }, 00:16:18.956 "peer_address": { 00:16:18.956 "trtype": "TCP", 00:16:18.956 "adrfam": "IPv4", 00:16:18.956 "traddr": "10.0.0.1", 00:16:18.956 "trsvcid": "57304" 00:16:18.956 }, 00:16:18.956 "auth": { 00:16:18.956 "state": "completed", 00:16:18.956 "digest": "sha512", 00:16:18.956 "dhgroup": "ffdhe2048" 00:16:18.956 } 00:16:18.956 } 00:16:18.956 ]' 00:16:18.956 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.956 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.956 03:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.956 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.956 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.956 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.956 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.956 03:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.215 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:19.215 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:19.784 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.784 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.784 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.784 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.784 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.784 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.784 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.784 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:16:19.785 03:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.043 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.302 00:16:20.302 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.302 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.302 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.562 { 00:16:20.562 "cntlid": 113, 00:16:20.562 "qid": 0, 00:16:20.562 "state": "enabled", 00:16:20.562 "thread": "nvmf_tgt_poll_group_000", 00:16:20.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.562 "listen_address": { 
00:16:20.562 "trtype": "TCP", 00:16:20.562 "adrfam": "IPv4", 00:16:20.562 "traddr": "10.0.0.2", 00:16:20.562 "trsvcid": "4420" 00:16:20.562 }, 00:16:20.562 "peer_address": { 00:16:20.562 "trtype": "TCP", 00:16:20.562 "adrfam": "IPv4", 00:16:20.562 "traddr": "10.0.0.1", 00:16:20.562 "trsvcid": "57336" 00:16:20.562 }, 00:16:20.562 "auth": { 00:16:20.562 "state": "completed", 00:16:20.562 "digest": "sha512", 00:16:20.562 "dhgroup": "ffdhe3072" 00:16:20.562 } 00:16:20.562 } 00:16:20.562 ]' 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.562 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.820 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:20.820 03:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:21.386 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.386 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.386 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.386 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.386 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.386 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.386 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.386 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.646 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.904 00:16:21.904 03:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.904 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.904 03:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.163 { 00:16:22.163 "cntlid": 115, 00:16:22.163 "qid": 0, 00:16:22.163 "state": "enabled", 00:16:22.163 "thread": "nvmf_tgt_poll_group_000", 00:16:22.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.163 "listen_address": { 00:16:22.163 "trtype": "TCP", 00:16:22.163 "adrfam": "IPv4", 00:16:22.163 "traddr": "10.0.0.2", 00:16:22.163 "trsvcid": "4420" 00:16:22.163 }, 00:16:22.163 "peer_address": { 00:16:22.163 "trtype": "TCP", 00:16:22.163 "adrfam": "IPv4", 00:16:22.163 "traddr": "10.0.0.1", 00:16:22.163 "trsvcid": "57378" 00:16:22.163 }, 00:16:22.163 "auth": { 00:16:22.163 "state": "completed", 00:16:22.163 "digest": "sha512", 00:16:22.163 "dhgroup": "ffdhe3072" 00:16:22.163 } 00:16:22.163 } 00:16:22.163 ]' 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.163 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.422 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:22.422 03:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:22.989 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.989 03:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.989 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.989 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.989 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.989 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.989 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.989 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.247 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:23.247 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.247 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.247 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.247 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.247 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.248 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.248 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.248 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.248 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.248 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.248 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.248 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.505 00:16:23.505 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.505 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.505 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.762 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.762 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.762 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.762 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.762 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.762 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.762 { 00:16:23.763 "cntlid": 117, 00:16:23.763 "qid": 0, 00:16:23.763 "state": "enabled", 00:16:23.763 "thread": "nvmf_tgt_poll_group_000", 00:16:23.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.763 "listen_address": { 00:16:23.763 "trtype": "TCP", 00:16:23.763 "adrfam": "IPv4", 00:16:23.763 "traddr": "10.0.0.2", 00:16:23.763 "trsvcid": "4420" 00:16:23.763 }, 00:16:23.763 "peer_address": { 00:16:23.763 "trtype": "TCP", 00:16:23.763 "adrfam": "IPv4", 00:16:23.763 "traddr": "10.0.0.1", 00:16:23.763 "trsvcid": "57404" 00:16:23.763 }, 00:16:23.763 "auth": { 00:16:23.763 "state": "completed", 00:16:23.763 "digest": "sha512", 00:16:23.763 "dhgroup": "ffdhe3072" 00:16:23.763 } 00:16:23.763 } 00:16:23.763 ]' 00:16:23.763 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.763 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.763 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.763 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.763 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.763 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:23.763 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.763 03:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.020 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:24.020 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:24.586 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.586 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.586 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.586 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.586 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.586 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:24.586 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.586 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.845 03:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.103 00:16:25.103 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.103 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.103 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.360 { 00:16:25.360 "cntlid": 119, 00:16:25.360 "qid": 0, 00:16:25.360 "state": "enabled", 00:16:25.360 "thread": "nvmf_tgt_poll_group_000", 00:16:25.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.360 "listen_address": { 00:16:25.360 
"trtype": "TCP", 00:16:25.360 "adrfam": "IPv4", 00:16:25.360 "traddr": "10.0.0.2", 00:16:25.360 "trsvcid": "4420" 00:16:25.360 }, 00:16:25.360 "peer_address": { 00:16:25.360 "trtype": "TCP", 00:16:25.360 "adrfam": "IPv4", 00:16:25.360 "traddr": "10.0.0.1", 00:16:25.360 "trsvcid": "58460" 00:16:25.360 }, 00:16:25.360 "auth": { 00:16:25.360 "state": "completed", 00:16:25.360 "digest": "sha512", 00:16:25.360 "dhgroup": "ffdhe3072" 00:16:25.360 } 00:16:25.360 } 00:16:25.360 ]' 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.360 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.618 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:25.618 03:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:26.184 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.184 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.184 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.184 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.184 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.184 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.184 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.184 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.184 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.463 03:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.463 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.723 00:16:26.723 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.723 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.723 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.983 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.983 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.983 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.983 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.983 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.983 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.983 { 00:16:26.983 "cntlid": 121, 00:16:26.983 "qid": 0, 00:16:26.983 "state": "enabled", 00:16:26.983 "thread": "nvmf_tgt_poll_group_000", 00:16:26.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.983 "listen_address": { 00:16:26.983 "trtype": "TCP", 00:16:26.983 "adrfam": "IPv4", 00:16:26.983 "traddr": "10.0.0.2", 00:16:26.983 "trsvcid": "4420" 00:16:26.983 }, 00:16:26.983 "peer_address": { 00:16:26.983 "trtype": "TCP", 00:16:26.983 "adrfam": "IPv4", 00:16:26.983 "traddr": "10.0.0.1", 00:16:26.983 "trsvcid": "58492" 00:16:26.983 }, 00:16:26.983 "auth": { 00:16:26.983 "state": "completed", 00:16:26.983 "digest": "sha512", 00:16:26.983 "dhgroup": "ffdhe4096" 00:16:26.983 } 00:16:26.983 } 00:16:26.983 ]' 00:16:26.983 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.983 03:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.983 03:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.983 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.983 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.983 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.983 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.983 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.242 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:27.242 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:27.811 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:27.811 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.811 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.811 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.811 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.811 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.811 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.811 03:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.069 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.327 00:16:28.327 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.327 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.327 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.585 { 00:16:28.585 "cntlid": 123, 00:16:28.585 "qid": 0, 00:16:28.585 "state": "enabled", 00:16:28.585 "thread": "nvmf_tgt_poll_group_000", 00:16:28.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.585 "listen_address": { 00:16:28.585 "trtype": "TCP", 00:16:28.585 "adrfam": "IPv4", 00:16:28.585 "traddr": "10.0.0.2", 00:16:28.585 "trsvcid": "4420" 00:16:28.585 }, 00:16:28.585 "peer_address": { 00:16:28.585 "trtype": "TCP", 00:16:28.585 "adrfam": "IPv4", 00:16:28.585 "traddr": "10.0.0.1", 00:16:28.585 "trsvcid": "58518" 00:16:28.585 }, 00:16:28.585 "auth": { 00:16:28.585 "state": "completed", 00:16:28.585 "digest": "sha512", 00:16:28.585 "dhgroup": "ffdhe4096" 00:16:28.585 } 00:16:28.585 } 00:16:28.585 ]' 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.585 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.844 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:28.844 03:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:29.410 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.410 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.410 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.410 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.410 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.410 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:29.410 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.411 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.669 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.927 00:16:29.927 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.927 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.927 03:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.186 { 00:16:30.186 "cntlid": 125, 00:16:30.186 "qid": 0, 00:16:30.186 "state": "enabled", 00:16:30.186 "thread": "nvmf_tgt_poll_group_000", 00:16:30.186 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.186 "listen_address": { 00:16:30.186 "trtype": "TCP", 00:16:30.186 "adrfam": "IPv4", 00:16:30.186 "traddr": "10.0.0.2", 00:16:30.186 "trsvcid": "4420" 00:16:30.186 }, 00:16:30.186 "peer_address": { 00:16:30.186 "trtype": "TCP", 00:16:30.186 "adrfam": "IPv4", 00:16:30.186 "traddr": "10.0.0.1", 00:16:30.186 "trsvcid": "58556" 00:16:30.186 }, 00:16:30.186 "auth": { 00:16:30.186 "state": "completed", 00:16:30.186 "digest": "sha512", 00:16:30.186 "dhgroup": "ffdhe4096" 00:16:30.186 } 00:16:30.186 } 00:16:30.186 ]' 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.186 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.445 03:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:30.445 03:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:31.012 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.012 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.012 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.012 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.012 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.012 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.012 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.012 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.270 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.529 00:16:31.529 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:31.529 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.529 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.787 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.788 { 00:16:31.788 "cntlid": 127, 00:16:31.788 "qid": 0, 00:16:31.788 "state": "enabled", 00:16:31.788 "thread": "nvmf_tgt_poll_group_000", 00:16:31.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.788 "listen_address": { 00:16:31.788 "trtype": "TCP", 00:16:31.788 "adrfam": "IPv4", 00:16:31.788 "traddr": "10.0.0.2", 00:16:31.788 "trsvcid": "4420" 00:16:31.788 }, 00:16:31.788 "peer_address": { 00:16:31.788 "trtype": "TCP", 00:16:31.788 "adrfam": "IPv4", 00:16:31.788 "traddr": "10.0.0.1", 00:16:31.788 "trsvcid": "58578" 00:16:31.788 }, 00:16:31.788 "auth": { 00:16:31.788 "state": "completed", 00:16:31.788 "digest": "sha512", 00:16:31.788 "dhgroup": "ffdhe4096" 00:16:31.788 } 00:16:31.788 } 00:16:31.788 ]' 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.788 03:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.788 03:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.046 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:32.046 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:32.613 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.613 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.613 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.613 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.613 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.613 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.613 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.613 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:32.613 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.872 03:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.132 00:16:33.132 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.132 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.132 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.390 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.390 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.390 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.390 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.390 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.390 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.390 { 00:16:33.390 "cntlid": 129, 00:16:33.390 "qid": 0, 00:16:33.390 "state": "enabled", 00:16:33.390 "thread": "nvmf_tgt_poll_group_000", 00:16:33.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.390 "listen_address": { 00:16:33.390 "trtype": "TCP", 00:16:33.390 "adrfam": "IPv4", 00:16:33.390 "traddr": "10.0.0.2", 00:16:33.390 "trsvcid": "4420" 00:16:33.390 }, 00:16:33.390 "peer_address": { 00:16:33.390 "trtype": "TCP", 00:16:33.390 "adrfam": "IPv4", 00:16:33.390 "traddr": "10.0.0.1", 00:16:33.390 "trsvcid": "58620" 00:16:33.390 }, 00:16:33.390 "auth": { 00:16:33.390 "state": "completed", 00:16:33.390 "digest": "sha512", 00:16:33.390 "dhgroup": "ffdhe6144" 00:16:33.390 } 00:16:33.390 } 00:16:33.390 ]' 00:16:33.390 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.390 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.391 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.649 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.649 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.649 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:33.649 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.649 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.649 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:33.649 03:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:34.216 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.216 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.216 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.216 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.216 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.216 03:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.216 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.216 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.474 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.040 00:16:35.040 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.040 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.040 03:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.040 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.040 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.040 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.040 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.040 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.040 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.040 { 00:16:35.040 "cntlid": 131, 00:16:35.040 "qid": 0, 00:16:35.040 "state": 
"enabled", 00:16:35.040 "thread": "nvmf_tgt_poll_group_000", 00:16:35.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.040 "listen_address": { 00:16:35.040 "trtype": "TCP", 00:16:35.040 "adrfam": "IPv4", 00:16:35.040 "traddr": "10.0.0.2", 00:16:35.040 "trsvcid": "4420" 00:16:35.040 }, 00:16:35.040 "peer_address": { 00:16:35.040 "trtype": "TCP", 00:16:35.040 "adrfam": "IPv4", 00:16:35.040 "traddr": "10.0.0.1", 00:16:35.040 "trsvcid": "55538" 00:16:35.040 }, 00:16:35.040 "auth": { 00:16:35.040 "state": "completed", 00:16:35.040 "digest": "sha512", 00:16:35.040 "dhgroup": "ffdhe6144" 00:16:35.040 } 00:16:35.040 } 00:16:35.040 ]' 00:16:35.040 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.040 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.040 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.299 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.299 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.299 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.299 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.299 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.557 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret 
DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:35.557 03:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.126 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.385 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.385 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.385 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.385 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.645 00:16:36.645 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.645 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.645 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.913 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.914 { 00:16:36.914 "cntlid": 133, 00:16:36.914 "qid": 0, 00:16:36.914 "state": "enabled", 00:16:36.914 "thread": "nvmf_tgt_poll_group_000", 00:16:36.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.914 "listen_address": { 00:16:36.914 "trtype": "TCP", 00:16:36.914 "adrfam": "IPv4", 00:16:36.914 "traddr": "10.0.0.2", 00:16:36.914 "trsvcid": "4420" 00:16:36.914 }, 00:16:36.914 "peer_address": { 00:16:36.914 "trtype": "TCP", 00:16:36.914 "adrfam": "IPv4", 00:16:36.914 "traddr": "10.0.0.1", 00:16:36.914 "trsvcid": "55570" 00:16:36.914 }, 00:16:36.914 "auth": { 00:16:36.914 "state": "completed", 00:16:36.914 "digest": "sha512", 00:16:36.914 "dhgroup": "ffdhe6144" 00:16:36.914 } 
00:16:36.914 } 00:16:36.914 ]' 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.914 03:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.172 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:37.172 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:37.738 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:16:37.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.738 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.738 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.738 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.738 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.738 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.738 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.738 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.997 03:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.256 00:16:38.256 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.256 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.256 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.514 { 00:16:38.514 "cntlid": 135, 00:16:38.514 "qid": 0, 00:16:38.514 "state": "enabled", 00:16:38.514 "thread": "nvmf_tgt_poll_group_000", 00:16:38.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.514 "listen_address": { 00:16:38.514 "trtype": "TCP", 00:16:38.514 "adrfam": "IPv4", 00:16:38.514 "traddr": "10.0.0.2", 00:16:38.514 "trsvcid": "4420" 00:16:38.514 }, 00:16:38.514 "peer_address": { 00:16:38.514 "trtype": "TCP", 00:16:38.514 "adrfam": "IPv4", 00:16:38.514 "traddr": "10.0.0.1", 00:16:38.514 "trsvcid": "55602" 00:16:38.514 }, 00:16:38.514 "auth": { 00:16:38.514 "state": "completed", 00:16:38.514 "digest": "sha512", 00:16:38.514 "dhgroup": "ffdhe6144" 00:16:38.514 } 00:16:38.514 } 00:16:38.514 ]' 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.514 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.515 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.515 03:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.515 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.773 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:38.773 03:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:39.339 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.339 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.339 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.339 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.339 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.339 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.339 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.339 03:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.339 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.598 03:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.164 00:16:40.164 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.164 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.164 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.423 { 00:16:40.423 "cntlid": 137, 00:16:40.423 "qid": 0, 00:16:40.423 "state": "enabled", 00:16:40.423 "thread": "nvmf_tgt_poll_group_000", 00:16:40.423 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.423 "listen_address": { 00:16:40.423 "trtype": "TCP", 00:16:40.423 "adrfam": "IPv4", 00:16:40.423 "traddr": "10.0.0.2", 00:16:40.423 "trsvcid": "4420" 00:16:40.423 }, 00:16:40.423 "peer_address": { 00:16:40.423 "trtype": "TCP", 00:16:40.423 "adrfam": "IPv4", 00:16:40.423 "traddr": "10.0.0.1", 00:16:40.423 "trsvcid": "55640" 00:16:40.423 }, 00:16:40.423 "auth": { 00:16:40.423 "state": "completed", 00:16:40.423 "digest": "sha512", 00:16:40.423 "dhgroup": "ffdhe8192" 00:16:40.423 } 00:16:40.423 } 00:16:40.423 ]' 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.423 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.683 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret 
DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:40.683 03:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:41.252 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.252 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.252 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.252 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.252 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.252 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.252 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.252 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.512 03:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.512 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.080 00:16:42.080 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.080 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.080 03:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.080 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.080 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.080 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.080 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.080 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.080 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.080 { 00:16:42.080 "cntlid": 139, 00:16:42.080 "qid": 0, 00:16:42.080 "state": "enabled", 00:16:42.080 "thread": "nvmf_tgt_poll_group_000", 00:16:42.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.080 "listen_address": { 00:16:42.080 "trtype": "TCP", 00:16:42.080 "adrfam": "IPv4", 00:16:42.080 "traddr": "10.0.0.2", 00:16:42.080 "trsvcid": "4420" 00:16:42.080 }, 00:16:42.080 "peer_address": { 00:16:42.080 "trtype": "TCP", 00:16:42.080 "adrfam": "IPv4", 00:16:42.080 "traddr": "10.0.0.1", 00:16:42.080 "trsvcid": "55660" 00:16:42.080 }, 00:16:42.080 "auth": { 00:16:42.080 "state": 
"completed", 00:16:42.080 "digest": "sha512", 00:16:42.080 "dhgroup": "ffdhe8192" 00:16:42.080 } 00:16:42.080 } 00:16:42.080 ]' 00:16:42.080 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.080 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.080 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.339 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.339 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.339 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.339 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.339 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.598 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:42.598 03:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: --dhchap-ctrl-secret DHHC-1:02:Y2M1NWViYTIxODVlZmI3M2VkNDc1ODI1YjExZTAzZGFkZWI1ODg0YWQ1ODUyODU0rHVN0g==: 00:16:43.167 03:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.167 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.734 00:16:43.734 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.734 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.734 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.992 
03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.992 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.992 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.992 { 00:16:43.992 "cntlid": 141, 00:16:43.992 "qid": 0, 00:16:43.992 "state": "enabled", 00:16:43.992 "thread": "nvmf_tgt_poll_group_000", 00:16:43.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.992 "listen_address": { 00:16:43.992 "trtype": "TCP", 00:16:43.992 "adrfam": "IPv4", 00:16:43.992 "traddr": "10.0.0.2", 00:16:43.993 "trsvcid": "4420" 00:16:43.993 }, 00:16:43.993 "peer_address": { 00:16:43.993 "trtype": "TCP", 00:16:43.993 "adrfam": "IPv4", 00:16:43.993 "traddr": "10.0.0.1", 00:16:43.993 "trsvcid": "55684" 00:16:43.993 }, 00:16:43.993 "auth": { 00:16:43.993 "state": "completed", 00:16:43.993 "digest": "sha512", 00:16:43.993 "dhgroup": "ffdhe8192" 00:16:43.993 } 00:16:43.993 } 00:16:43.993 ]' 00:16:43.993 03:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.993 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.993 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.993 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.993 03:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.993 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.993 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.993 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.251 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:44.251 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:01:MzVhZGVlNzZkOTQ1ZDFkMjkxOGFkNTg5NDU5OTk4NmN4ySi6: 00:16:44.819 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.819 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.819 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.819 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.819 
03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.819 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.819 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:44.819 03:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.079 03:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.079 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.645 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.645 { 00:16:45.645 "cntlid": 143, 
00:16:45.645 "qid": 0, 00:16:45.645 "state": "enabled", 00:16:45.645 "thread": "nvmf_tgt_poll_group_000", 00:16:45.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.645 "listen_address": { 00:16:45.645 "trtype": "TCP", 00:16:45.645 "adrfam": "IPv4", 00:16:45.645 "traddr": "10.0.0.2", 00:16:45.645 "trsvcid": "4420" 00:16:45.645 }, 00:16:45.645 "peer_address": { 00:16:45.645 "trtype": "TCP", 00:16:45.645 "adrfam": "IPv4", 00:16:45.645 "traddr": "10.0.0.1", 00:16:45.645 "trsvcid": "38282" 00:16:45.645 }, 00:16:45.645 "auth": { 00:16:45.645 "state": "completed", 00:16:45.645 "digest": "sha512", 00:16:45.645 "dhgroup": "ffdhe8192" 00:16:45.645 } 00:16:45.645 } 00:16:45.645 ]' 00:16:45.645 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.903 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.903 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.903 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.903 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.903 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.903 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.903 03:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.162 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:46.162 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
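The `connect_authenticate` checks above repeatedly probe the qpair listing with `jq -r '.[0].auth.digest'`, `.dhgroup`, and `.state` and compare against the expected values. A minimal Python sketch of that verification step, using the cntlid 143 qpair listing captured above (timestamps stripped, non-auth fields omitted); `check_auth` is an illustrative helper name, not part of SPDK:

```python
import json

# Trimmed-down qpair listing as returned by nvmf_subsystem_get_qpairs in the
# log above; only the fields the auth checks actually read are kept.
qpairs_json = """
[
  {
    "cntlid": 143,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "ffdhe8192"
    }
  }
]
"""

def check_auth(qpairs, digest, dhgroup):
    # Mirrors the three jq probes: .[0].auth.digest / .dhgroup / .state
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

qpairs = json.loads(qpairs_json)
print(check_auth(qpairs, "sha512", "ffdhe8192"))  # True
```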
00:16:46.729 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.988 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.989 03:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.247 00:16:47.506 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.506 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.506 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.506 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.506 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.506 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.506 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.506 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.506 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.506 { 00:16:47.506 "cntlid": 145, 00:16:47.506 "qid": 0, 00:16:47.506 "state": "enabled", 00:16:47.506 "thread": "nvmf_tgt_poll_group_000", 00:16:47.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.507 "listen_address": { 
00:16:47.507 "trtype": "TCP", 00:16:47.507 "adrfam": "IPv4", 00:16:47.507 "traddr": "10.0.0.2", 00:16:47.507 "trsvcid": "4420" 00:16:47.507 }, 00:16:47.507 "peer_address": { 00:16:47.507 "trtype": "TCP", 00:16:47.507 "adrfam": "IPv4", 00:16:47.507 "traddr": "10.0.0.1", 00:16:47.507 "trsvcid": "38310" 00:16:47.507 }, 00:16:47.507 "auth": { 00:16:47.507 "state": "completed", 00:16:47.507 "digest": "sha512", 00:16:47.507 "dhgroup": "ffdhe8192" 00:16:47.507 } 00:16:47.507 } 00:16:47.507 ]' 00:16:47.507 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.507 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.507 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.766 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.766 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.766 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.766 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.766 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.025 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:48.025 03:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODU0OTM1YjM3NWRlNzE1YmUwYjlkZmIzNjY5YjQ4ZjdmZGU2ZmQ3OWZmNjE5NTc2tqfyww==: --dhchap-ctrl-secret DHHC-1:03:ZmVlMGQzMzU0ZTQxNTAyOGNjYTkyYTdjMzBjOGFmYzY2NGYxNTFjZDY5NTIyM2ViYjU0NzlhZDE1ZDFlMzg2N2asu+M=: 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:48.594 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:48.853 request: 00:16:48.853 { 00:16:48.853 "name": "nvme0", 00:16:48.853 "trtype": "tcp", 00:16:48.853 "traddr": "10.0.0.2", 00:16:48.853 "adrfam": "ipv4", 00:16:48.853 "trsvcid": "4420", 00:16:48.853 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:48.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.853 "prchk_reftag": false, 00:16:48.853 "prchk_guard": false, 00:16:48.853 "hdgst": false, 00:16:48.853 "ddgst": 
false, 00:16:48.853 "dhchap_key": "key2", 00:16:48.853 "allow_unrecognized_csi": false, 00:16:48.853 "method": "bdev_nvme_attach_controller", 00:16:48.853 "req_id": 1 00:16:48.853 } 00:16:48.853 Got JSON-RPC error response 00:16:48.853 response: 00:16:48.853 { 00:16:48.853 "code": -5, 00:16:48.853 "message": "Input/output error" 00:16:48.853 } 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.853 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.113 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
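The negative test above (`NOT bdev_connect` with `key2` while the host entry only allows `key1`) is considered to pass precisely because `bdev_nvme_attach_controller` returns the JSON-RPC error shown in the log, `"code": -5` ("Input/output error"). A small sketch of how a caller could recognize that expected failure; `is_expected_auth_failure` is an illustrative name, not an SPDK API:

```python
import json

# JSON-RPC error body captured in the log when attaching with a key the
# subsystem host entry does not permit.
error_response = """
{
  "code": -5,
  "message": "Input/output error"
}
"""

def is_expected_auth_failure(resp_text):
    # The test harness expects the attach to fail; code -5 is the error
    # returned in the transcript above.
    resp = json.loads(resp_text)
    return resp.get("code") == -5

print(is_expected_auth_failure(error_response))  # True
```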
00:16:49.113 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.113 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:49.113 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.113 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:49.113 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.113 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:49.113 03:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.113 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.113 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.113 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.372 request: 00:16:49.372 { 00:16:49.372 "name": "nvme0", 00:16:49.372 "trtype": "tcp", 00:16:49.372 "traddr": "10.0.0.2", 
00:16:49.372 "adrfam": "ipv4", 00:16:49.372 "trsvcid": "4420", 00:16:49.372 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.372 "prchk_reftag": false, 00:16:49.372 "prchk_guard": false, 00:16:49.372 "hdgst": false, 00:16:49.372 "ddgst": false, 00:16:49.372 "dhchap_key": "key1", 00:16:49.372 "dhchap_ctrlr_key": "ckey2", 00:16:49.372 "allow_unrecognized_csi": false, 00:16:49.372 "method": "bdev_nvme_attach_controller", 00:16:49.372 "req_id": 1 00:16:49.372 } 00:16:49.372 Got JSON-RPC error response 00:16:49.372 response: 00:16:49.372 { 00:16:49.372 "code": -5, 00:16:49.372 "message": "Input/output error" 00:16:49.372 } 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.372 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.373 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.942 request: 00:16:49.942 { 00:16:49.942 "name": "nvme0", 00:16:49.942 "trtype": "tcp", 00:16:49.942 "traddr": "10.0.0.2", 00:16:49.942 "adrfam": "ipv4", 00:16:49.942 "trsvcid": "4420", 00:16:49.942 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.942 "prchk_reftag": false, 00:16:49.942 "prchk_guard": false, 00:16:49.942 "hdgst": false, 00:16:49.942 "ddgst": false, 00:16:49.942 "dhchap_key": "key1", 00:16:49.942 "dhchap_ctrlr_key": "ckey1", 00:16:49.942 "allow_unrecognized_csi": false, 00:16:49.942 "method": "bdev_nvme_attach_controller", 00:16:49.942 "req_id": 1 00:16:49.942 } 00:16:49.942 Got JSON-RPC error response 00:16:49.942 response: 00:16:49.942 { 00:16:49.942 "code": -5, 00:16:49.942 "message": "Input/output error" 00:16:49.942 } 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.942 
03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2593904 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2593904 ']' 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2593904 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593904 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593904' 00:16:49.942 killing process with pid 2593904 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2593904 00:16:49.942 03:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2593904 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2615931 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2615931 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2615931 ']' 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.202 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2615931 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2615931 ']' 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.461 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 null0 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2ZC 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Kro ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Kro 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.721 03:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oLP 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.U87 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.U87 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pIY 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 03:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.5eB ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5eB 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lkS 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.721 03:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.656 nvme0n1 00:16:51.656 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.657 { 00:16:51.657 "cntlid": 1, 00:16:51.657 "qid": 0, 00:16:51.657 "state": "enabled", 00:16:51.657 "thread": "nvmf_tgt_poll_group_000", 00:16:51.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.657 "listen_address": { 00:16:51.657 "trtype": "TCP", 00:16:51.657 "adrfam": "IPv4", 00:16:51.657 "traddr": "10.0.0.2", 00:16:51.657 "trsvcid": "4420" 00:16:51.657 }, 00:16:51.657 "peer_address": { 00:16:51.657 "trtype": "TCP", 00:16:51.657 "adrfam": "IPv4", 00:16:51.657 "traddr": "10.0.0.1", 00:16:51.657 "trsvcid": "38368" 00:16:51.657 }, 00:16:51.657 "auth": { 00:16:51.657 "state": "completed", 00:16:51.657 "digest": "sha512", 00:16:51.657 "dhgroup": "ffdhe8192" 00:16:51.657 } 00:16:51.657 } 00:16:51.657 ]' 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.657 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.914 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.914 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:16:51.914 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.914 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.914 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.914 03:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.171 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:52.171 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:52.736 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.996 03:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.996 request: 00:16:52.996 { 00:16:52.996 "name": "nvme0", 00:16:52.996 "trtype": "tcp", 00:16:52.996 "traddr": "10.0.0.2", 00:16:52.996 "adrfam": "ipv4", 00:16:52.996 "trsvcid": "4420", 00:16:52.996 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:52.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.996 "prchk_reftag": false, 00:16:52.996 "prchk_guard": false, 00:16:52.996 "hdgst": false, 00:16:52.996 "ddgst": false, 00:16:52.996 "dhchap_key": "key3", 00:16:52.996 "allow_unrecognized_csi": false, 00:16:52.996 "method": "bdev_nvme_attach_controller", 00:16:52.996 "req_id": 1 00:16:52.996 } 00:16:52.996 Got JSON-RPC error response 00:16:52.996 response: 00:16:52.996 { 00:16:52.996 "code": -5, 00:16:52.996 "message": "Input/output error" 00:16:52.996 } 00:16:52.996 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:52.996 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.996 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:52.996 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.996 03:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:52.996 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:52.996 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:52.996 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:53.280 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.586 request: 00:16:53.586 { 00:16:53.586 "name": "nvme0", 00:16:53.586 "trtype": "tcp", 00:16:53.587 "traddr": "10.0.0.2", 00:16:53.587 "adrfam": "ipv4", 00:16:53.587 "trsvcid": "4420", 00:16:53.587 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.587 "prchk_reftag": false, 00:16:53.587 "prchk_guard": false, 00:16:53.587 "hdgst": false, 00:16:53.587 "ddgst": false, 00:16:53.587 "dhchap_key": "key3", 00:16:53.587 "allow_unrecognized_csi": false, 00:16:53.587 "method": "bdev_nvme_attach_controller", 00:16:53.587 "req_id": 1 00:16:53.587 } 00:16:53.587 Got JSON-RPC error response 00:16:53.587 response: 00:16:53.587 { 00:16:53.587 "code": -5, 00:16:53.587 "message": "Input/output error" 00:16:53.587 } 00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.587 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:53.910 03:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:54.196 request: 00:16:54.196 { 00:16:54.196 "name": "nvme0", 00:16:54.196 "trtype": "tcp", 00:16:54.196 "traddr": "10.0.0.2", 00:16:54.196 "adrfam": "ipv4", 00:16:54.196 "trsvcid": "4420", 00:16:54.196 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:54.196 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.196 "prchk_reftag": false, 00:16:54.196 "prchk_guard": false, 00:16:54.196 "hdgst": false, 00:16:54.196 "ddgst": false, 00:16:54.196 "dhchap_key": "key0", 00:16:54.196 "dhchap_ctrlr_key": "key1", 00:16:54.196 "allow_unrecognized_csi": false, 00:16:54.196 "method": "bdev_nvme_attach_controller", 00:16:54.196 "req_id": 1 00:16:54.196 } 00:16:54.196 Got JSON-RPC error response 00:16:54.196 response: 00:16:54.196 { 00:16:54.196 "code": -5, 00:16:54.196 "message": "Input/output error" 00:16:54.196 } 00:16:54.196 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:54.197 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:54.197 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:54.197 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:54.197 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:54.197 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:54.197 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:54.456 nvme0n1 00:16:54.456 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:16:54.456 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.456 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:54.456 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.456 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.456 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.715 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:54.715 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.715 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.715 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.715 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:54.715 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:54.716 03:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:55.651 nvme0n1 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.651 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:55.910 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.910 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:55.910 03:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: --dhchap-ctrl-secret DHHC-1:03:ZDI0NjQyNzQxZTJiNzEyMTMxNjI0ZGYzZjlmZjNmZDk4NTAxOGUyN2RmZWYzNzI5YTc1OWZiOWNhNThhMzY4OXDEmbI=: 00:16:56.478 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:56.478 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:56.478 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:56.478 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:56.478 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:56.478 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:56.478 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:56.478 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.478 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.738 03:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:56.738 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:56.738 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:56.738 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:56.738 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.738 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:56.738 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.738 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:56.738 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:56.738 03:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:56.997 request: 00:16:56.997 { 00:16:56.997 "name": "nvme0", 00:16:56.997 "trtype": "tcp", 00:16:56.997 "traddr": "10.0.0.2", 00:16:56.997 "adrfam": "ipv4", 00:16:56.997 "trsvcid": "4420", 00:16:56.997 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.997 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.997 "prchk_reftag": false, 00:16:56.997 "prchk_guard": false, 00:16:56.997 "hdgst": false, 00:16:56.997 "ddgst": false, 00:16:56.997 "dhchap_key": "key1", 00:16:56.997 "allow_unrecognized_csi": false, 00:16:56.997 "method": "bdev_nvme_attach_controller", 00:16:56.997 "req_id": 1 00:16:56.997 } 00:16:56.997 Got JSON-RPC error response 00:16:56.997 response: 00:16:56.997 { 00:16:56.997 "code": -5, 00:16:56.997 "message": "Input/output error" 00:16:56.997 } 00:16:56.997 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:56.997 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:57.256 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:57.256 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:57.256 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.256 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.256 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.824 nvme0n1 00:16:57.824 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:16:57.824 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:57.824 03:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.083 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.083 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.083 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.342 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.342 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.342 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.342 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.342 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:58.342 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:58.342 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:58.602 nvme0n1 00:16:58.602 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:58.602 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:58.602 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: '' 2s 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:58.862 03:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: ]] 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjA3NGFjYjNhNGJiODExNWY5OTcyYjI4NzdjMjNmNjihpPHa: 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:58.862 03:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.397 03:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: 2s 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: ]] 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:YTBjYTRhMDlhZWY3ODZiNzdjZTc1YmNmYzc3OGRlYmU5ZmM1MmFlNmQ4ZTc0NGI3WJrqKQ==: 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:01.398 03:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:03.300 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:03.300 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:03.300 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:03.300 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:03.300 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:03.301 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:03.301 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:03.301 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.301 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.301 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.301 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.301 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.301 03:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:03.301 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:03.301 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:03.867 nvme0n1 00:17:03.867 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:03.867 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.867 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.867 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.867 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:03.867 03:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:04.433 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:04.691 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:04.691 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:04.691 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.949 03:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:04.949 03:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:05.515 request: 00:17:05.515 { 00:17:05.515 "name": "nvme0", 00:17:05.515 "dhchap_key": "key1", 00:17:05.515 "dhchap_ctrlr_key": "key3", 00:17:05.515 "method": "bdev_nvme_set_keys", 00:17:05.515 "req_id": 1 00:17:05.515 } 00:17:05.515 Got JSON-RPC error response 00:17:05.515 response: 00:17:05.515 { 00:17:05.515 "code": -13, 00:17:05.515 "message": "Permission denied" 00:17:05.515 } 00:17:05.515 03:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.515 03:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.515 03:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.515 03:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.515 03:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:05.515 03:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:05.515 03:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.515 03:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:05.515 03:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:06.889 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:06.889 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:06.889 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.889 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:06.889 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.889 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.889 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.889 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.889 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:06.890 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:06.890 03:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:07.456 nvme0n1 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:07.456 03:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:08.022 request: 00:17:08.022 { 00:17:08.022 "name": "nvme0", 00:17:08.022 "dhchap_key": "key2", 
00:17:08.022 "dhchap_ctrlr_key": "key0", 00:17:08.022 "method": "bdev_nvme_set_keys", 00:17:08.022 "req_id": 1 00:17:08.022 } 00:17:08.022 Got JSON-RPC error response 00:17:08.022 response: 00:17:08.022 { 00:17:08.022 "code": -13, 00:17:08.022 "message": "Permission denied" 00:17:08.022 } 00:17:08.022 03:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.022 03:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.022 03:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.022 03:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.022 03:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:08.022 03:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:08.022 03:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.281 03:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:08.281 03:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:09.218 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:09.218 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:09.218 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:09.477 03:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2593925 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2593925 ']' 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2593925 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593925 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593925' 00:17:09.477 killing process with pid 2593925 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2593925 00:17:09.477 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2593925 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.737 rmmod nvme_tcp 00:17:09.737 rmmod nvme_fabrics 00:17:09.737 rmmod nvme_keyring 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2615931 ']' 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2615931 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2615931 ']' 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2615931 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:09.737 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.997 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2615931 00:17:09.997 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.997 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.997 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 2615931' 00:17:09.997 killing process with pid 2615931 00:17:09.997 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2615931 00:17:09.997 03:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2615931 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.997 03:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.2ZC /tmp/spdk.key-sha256.oLP 
/tmp/spdk.key-sha384.pIY /tmp/spdk.key-sha512.lkS /tmp/spdk.key-sha512.Kro /tmp/spdk.key-sha384.U87 /tmp/spdk.key-sha256.5eB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:12.532 00:17:12.532 real 2m31.967s 00:17:12.532 user 5m51.011s 00:17:12.532 sys 0m23.865s 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.532 ************************************ 00:17:12.532 END TEST nvmf_auth_target 00:17:12.532 ************************************ 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.532 ************************************ 00:17:12.532 START TEST nvmf_bdevio_no_huge 00:17:12.532 ************************************ 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:12.532 * Looking for test storage... 
00:17:12.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:12.532 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:12.533 03:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.533 03:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:12.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.533 --rc genhtml_branch_coverage=1 00:17:12.533 --rc genhtml_function_coverage=1 00:17:12.533 --rc genhtml_legend=1 00:17:12.533 --rc geninfo_all_blocks=1 00:17:12.533 --rc geninfo_unexecuted_blocks=1 00:17:12.533 00:17:12.533 ' 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:12.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.533 --rc genhtml_branch_coverage=1 00:17:12.533 --rc genhtml_function_coverage=1 00:17:12.533 --rc genhtml_legend=1 00:17:12.533 --rc geninfo_all_blocks=1 00:17:12.533 --rc geninfo_unexecuted_blocks=1 00:17:12.533 00:17:12.533 ' 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:12.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.533 --rc genhtml_branch_coverage=1 00:17:12.533 --rc genhtml_function_coverage=1 00:17:12.533 --rc genhtml_legend=1 00:17:12.533 --rc geninfo_all_blocks=1 00:17:12.533 --rc geninfo_unexecuted_blocks=1 00:17:12.533 00:17:12.533 ' 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:12.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.533 --rc genhtml_branch_coverage=1 00:17:12.533 --rc genhtml_function_coverage=1 00:17:12.533 --rc genhtml_legend=1 00:17:12.533 --rc geninfo_all_blocks=1 00:17:12.533 --rc geninfo_unexecuted_blocks=1 00:17:12.533 00:17:12.533 ' 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:12.533 
03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:12.533 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:12.534 03:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:17.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:17.801 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:17.801 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:17.802 Found net devices under 0000:86:00.0: cvl_0_0 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.802 
03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:17.802 Found net devices under 0000:86:00.1: cvl_0_1 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:17.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:17:17.802 00:17:17.802 --- 10.0.0.2 ping statistics --- 00:17:17.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.802 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:17:17.802 00:17:17.802 --- 10.0.0.1 ping statistics --- 00:17:17.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.802 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2622815 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2622815 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2622815 ']' 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.802 03:25:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 [2024-12-06 03:25:37.884567] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:17:17.802 [2024-12-06 03:25:37.884612] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:18.062 [2024-12-06 03:25:37.957092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.062 [2024-12-06 03:25:38.004486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.062 [2024-12-06 03:25:38.004518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.062 [2024-12-06 03:25:38.004526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.062 [2024-12-06 03:25:38.004531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.062 [2024-12-06 03:25:38.004536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:18.062 [2024-12-06 03:25:38.005605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:18.062 [2024-12-06 03:25:38.005718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:18.062 [2024-12-06 03:25:38.005824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.062 [2024-12-06 03:25:38.005824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.062 [2024-12-06 03:25:38.150455] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.062 03:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.062 Malloc0 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.062 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.062 [2024-12-06 03:25:38.194749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.062 03:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:18.322 { 00:17:18.322 "params": { 00:17:18.322 "name": "Nvme$subsystem", 00:17:18.322 "trtype": "$TEST_TRANSPORT", 00:17:18.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.322 "adrfam": "ipv4", 00:17:18.322 "trsvcid": "$NVMF_PORT", 00:17:18.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.322 "hdgst": ${hdgst:-false}, 00:17:18.322 "ddgst": ${ddgst:-false} 00:17:18.322 }, 00:17:18.322 "method": "bdev_nvme_attach_controller" 00:17:18.322 } 00:17:18.322 EOF 00:17:18.322 )") 00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:18.322 03:25:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:18.322 "params": { 00:17:18.322 "name": "Nvme1", 00:17:18.322 "trtype": "tcp", 00:17:18.322 "traddr": "10.0.0.2", 00:17:18.322 "adrfam": "ipv4", 00:17:18.322 "trsvcid": "4420", 00:17:18.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.322 "hdgst": false, 00:17:18.322 "ddgst": false 00:17:18.322 }, 00:17:18.322 "method": "bdev_nvme_attach_controller" 00:17:18.322 }' 00:17:18.322 [2024-12-06 03:25:38.246467] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:17:18.322 [2024-12-06 03:25:38.246515] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2622847 ] 00:17:18.322 [2024-12-06 03:25:38.312383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.322 [2024-12-06 03:25:38.361972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.322 [2024-12-06 03:25:38.362068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.322 [2024-12-06 03:25:38.362068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.581 I/O targets: 00:17:18.581 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:18.581 00:17:18.581 00:17:18.581 CUnit - A unit testing framework for C - Version 2.1-3 00:17:18.581 http://cunit.sourceforge.net/ 00:17:18.581 00:17:18.581 00:17:18.581 Suite: bdevio tests on: Nvme1n1 00:17:18.581 Test: blockdev write read block ...passed 00:17:18.840 Test: blockdev write zeroes read block ...passed 00:17:18.840 Test: blockdev write zeroes read no split ...passed 00:17:18.840 Test: blockdev write zeroes 
read split ...passed 00:17:18.840 Test: blockdev write zeroes read split partial ...passed 00:17:18.840 Test: blockdev reset ...[2024-12-06 03:25:38.812214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:18.840 [2024-12-06 03:25:38.812282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc510 (9): Bad file descriptor 00:17:18.840 [2024-12-06 03:25:38.832887] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:18.840 passed 00:17:18.840 Test: blockdev write read 8 blocks ...passed 00:17:18.840 Test: blockdev write read size > 128k ...passed 00:17:18.840 Test: blockdev write read invalid size ...passed 00:17:18.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.840 Test: blockdev write read max offset ...passed 00:17:18.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:19.100 Test: blockdev writev readv 8 blocks ...passed 00:17:19.100 Test: blockdev writev readv 30 x 1block ...passed 00:17:19.100 Test: blockdev writev readv block ...passed 00:17:19.100 Test: blockdev writev readv size > 128k ...passed 00:17:19.100 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:19.100 Test: blockdev comparev and writev ...[2024-12-06 03:25:39.043851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.100 [2024-12-06 03:25:39.043882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.043897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.101 [2024-12-06 
03:25:39.043906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.044151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.101 [2024-12-06 03:25:39.044164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.044184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.101 [2024-12-06 03:25:39.044192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.044438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.101 [2024-12-06 03:25:39.044449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.044462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.101 [2024-12-06 03:25:39.044469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.044703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.101 [2024-12-06 03:25:39.044713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.044725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.101 [2024-12-06 03:25:39.044732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:19.101 passed 00:17:19.101 Test: blockdev nvme passthru rw ...passed 00:17:19.101 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:25:39.127312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.101 [2024-12-06 03:25:39.127330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.127444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.101 [2024-12-06 03:25:39.127454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.127563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.101 [2024-12-06 03:25:39.127573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:19.101 [2024-12-06 03:25:39.127681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.101 [2024-12-06 03:25:39.127691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:19.101 passed 00:17:19.101 Test: blockdev nvme admin passthru ...passed 00:17:19.101 Test: blockdev copy ...passed 00:17:19.101 00:17:19.101 Run Summary: Type Total Ran Passed Failed Inactive 00:17:19.101 suites 1 1 n/a 0 0 00:17:19.101 tests 23 23 23 0 0 00:17:19.101 asserts 152 152 152 0 n/a 00:17:19.101 00:17:19.101 Elapsed time = 1.062 seconds 
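The bdevio run summarized above is driven by a JSON config built inline (the heredoc visible at nvmf/common.sh@582) and handed to bdevio via `/dev/fd/62`. Below is a simplified, standalone sketch of that generator, with the transport, address, and NQNs fixed to the values printed in this trace; the real helper supports multiple subsystems and post-processes with `jq`:

```shell
#!/usr/bin/env bash
# Simplified sketch of gen_nvmf_target_json for a single subsystem:
# emit the bdev_nvme_attach_controller config that bdevio consumed
# above. Values mirror this run (TCP, 10.0.0.2:4420, cnode1/host1).
gen_nvmf_target_json() {
  local subsystem=${1:-1}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# bdevio reads the config through process substitution, e.g.:
#   bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024
gen_nvmf_target_json
```

Feeding the config over a file descriptor rather than a temp file is what the `--json /dev/fd/62` argument in the trace corresponds to.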
00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.361 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.361 rmmod nvme_tcp 00:17:19.361 rmmod nvme_fabrics 00:17:19.361 rmmod nvme_keyring 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2622815 ']' 00:17:19.622 03:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2622815 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2622815 ']' 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2622815 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2622815 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2622815' 00:17:19.622 killing process with pid 2622815 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2622815 00:17:19.622 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2622815 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:19.881 03:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.881 03:25:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.788 03:25:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:21.788 00:17:21.788 real 0m9.687s 00:17:21.788 user 0m11.069s 00:17:21.788 sys 0m4.876s 00:17:21.788 03:25:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.788 03:25:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.788 ************************************ 00:17:21.788 END TEST nvmf_bdevio_no_huge 00:17:21.788 ************************************ 00:17:22.048 03:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:22.048 03:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.048 03:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.048 03:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.048 
************************************ 00:17:22.048 START TEST nvmf_tls 00:17:22.048 ************************************ 00:17:22.048 03:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:22.048 * Looking for test storage... 00:17:22.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:22.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.048 --rc genhtml_branch_coverage=1 00:17:22.048 --rc genhtml_function_coverage=1 00:17:22.048 --rc genhtml_legend=1 00:17:22.048 --rc geninfo_all_blocks=1 00:17:22.048 --rc geninfo_unexecuted_blocks=1 00:17:22.048 00:17:22.048 ' 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:22.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.048 --rc genhtml_branch_coverage=1 00:17:22.048 --rc genhtml_function_coverage=1 00:17:22.048 --rc genhtml_legend=1 00:17:22.048 --rc geninfo_all_blocks=1 00:17:22.048 --rc geninfo_unexecuted_blocks=1 00:17:22.048 00:17:22.048 ' 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:22.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.048 --rc genhtml_branch_coverage=1 00:17:22.048 --rc genhtml_function_coverage=1 00:17:22.048 --rc genhtml_legend=1 00:17:22.048 --rc geninfo_all_blocks=1 00:17:22.048 --rc geninfo_unexecuted_blocks=1 00:17:22.048 00:17:22.048 ' 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:22.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.048 --rc genhtml_branch_coverage=1 00:17:22.048 --rc genhtml_function_coverage=1 00:17:22.048 --rc genhtml_legend=1 00:17:22.048 --rc geninfo_all_blocks=1 00:17:22.048 --rc geninfo_unexecuted_blocks=1 00:17:22.048 00:17:22.048 ' 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.048 
03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.048 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.308 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:22.309 03:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.581 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.582 03:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:27.582 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:27.582 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.582 03:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:27.582 Found net devices under 0000:86:00.0: cvl_0_0 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:27.582 Found net devices under 0000:86:00.1: cvl_0_1 00:17:27.582 03:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:27.582 
03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:27.582 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:27.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:17:27.842 00:17:27.842 --- 10.0.0.2 ping statistics --- 00:17:27.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.842 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:27.842 00:17:27.842 --- 10.0.0.1 ping statistics --- 00:17:27.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.842 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- 
# nvmfpid=2626603 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2626603 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2626603 ']' 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.842 03:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.842 [2024-12-06 03:25:47.850005] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:17:27.843 [2024-12-06 03:25:47.850052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.843 [2024-12-06 03:25:47.917118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.843 [2024-12-06 03:25:47.958620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.843 [2024-12-06 03:25:47.958652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:27.843 [2024-12-06 03:25:47.958660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.843 [2024-12-06 03:25:47.958666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.843 [2024-12-06 03:25:47.958671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.843 [2024-12-06 03:25:47.959235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:28.102 true 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.102 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:28.361 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:28.361 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:28.361 
03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:28.620 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.620 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:28.880 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:28.880 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:28.880 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:28.880 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.880 03:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:29.139 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:29.139 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:29.139 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.139 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:29.398 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:29.398 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:29.398 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:29.657 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.657 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:29.657 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:29.657 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:29.657 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:29.917 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.917 03:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:30.176 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:30.177 03:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.D12tBfAMKg 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.1uLNxe6jKv 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.D12tBfAMKg 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.1uLNxe6jKv 00:17:30.177 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:30.435 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:30.694 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.D12tBfAMKg 00:17:30.694 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.D12tBfAMKg 00:17:30.694 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:30.953 [2024-12-06 03:25:50.882331] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.953 03:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:30.953 03:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:31.211 [2024-12-06 03:25:51.259292] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:31.211 [2024-12-06 03:25:51.259517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.211 03:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:31.470 malloc0 00:17:31.470 03:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:31.728 03:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.D12tBfAMKg 00:17:31.728 03:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:31.987 03:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.D12tBfAMKg 00:17:44.191 Initializing NVMe Controllers 00:17:44.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:44.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:44.191 Initialization complete. Launching workers. 
00:17:44.191 ======================================================== 00:17:44.191 Latency(us) 00:17:44.191 Device Information : IOPS MiB/s Average min max 00:17:44.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16500.08 64.45 3878.83 855.53 5459.78 00:17:44.191 ======================================================== 00:17:44.191 Total : 16500.08 64.45 3878.83 855.53 5459.78 00:17:44.191 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D12tBfAMKg 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.D12tBfAMKg 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2628951 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2628951 /var/tmp/bdevperf.sock 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2628951 ']' 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
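The `NVMeTLSkey-1:01:…:` strings generated earlier by `format_interchange_psk` (nvmf/common.sh `format_key`, via `python -`) follow the NVMe/TCP TLS PSK interchange layout: a prefix, a two-digit hash identifier, and base64 of the key bytes with a 4-byte CRC-32 appended. A sketch of that construction; the little-endian CRC byte order is an assumption, and note the test script passes the literal hex string as the key bytes:

```python
import base64
import zlib

def format_key(prefix, key, hmac_id):
    """Build an interchange string shaped like nvmf/common.sh format_key:
    prefix ':' two-digit hash id ':' base64(key + CRC-32(key)) ':'.
    The CRC byte order used here (little-endian) is an assumption."""
    data = key + zlib.crc32(key).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, hmac_id, base64.b64encode(data).decode())

# The script hands the ASCII hex string itself to format_key, so the
# base64 payload decodes back to those 32 characters plus 4 CRC bytes.
psk = format_key("NVMeTLSkey-1", b"00112233445566778899aabbccddeeff", 1)
print(psk)
```

The resulting string is written to a `mktemp` file, `chmod 0600`-ed, and registered with `keyring_file_add_key key0`, which is the key the listener and `nvmf_subsystem_add_host --psk key0` use above.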
00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.191 [2024-12-06 03:26:02.195433] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:17:44.191 [2024-12-06 03:26:02.195481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628951 ] 00:17:44.191 [2024-12-06 03:26:02.253631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.191 [2024-12-06 03:26:02.296053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D12tBfAMKg 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:17:44.191 [2024-12-06 03:26:02.732095] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.191 TLSTESTn1 00:17:44.191 03:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:44.191 Running I/O for 10 seconds... 00:17:45.128 4860.00 IOPS, 18.98 MiB/s [2024-12-06T02:26:06.201Z] 5247.50 IOPS, 20.50 MiB/s [2024-12-06T02:26:07.136Z] 5330.67 IOPS, 20.82 MiB/s [2024-12-06T02:26:08.072Z] 5384.00 IOPS, 21.03 MiB/s [2024-12-06T02:26:09.014Z] 5434.40 IOPS, 21.23 MiB/s [2024-12-06T02:26:09.948Z] 5479.83 IOPS, 21.41 MiB/s [2024-12-06T02:26:11.327Z] 5515.71 IOPS, 21.55 MiB/s [2024-12-06T02:26:12.266Z] 5511.75 IOPS, 21.53 MiB/s [2024-12-06T02:26:13.200Z] 5532.33 IOPS, 21.61 MiB/s [2024-12-06T02:26:13.200Z] 5556.20 IOPS, 21.70 MiB/s 00:17:53.059 Latency(us) 00:17:53.059 [2024-12-06T02:26:13.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.059 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:53.059 Verification LBA range: start 0x0 length 0x2000 00:17:53.059 TLSTESTn1 : 10.02 5558.33 21.71 0.00 0.00 22988.54 4701.50 81606.57 00:17:53.059 [2024-12-06T02:26:13.200Z] =================================================================================================================== 00:17:53.059 [2024-12-06T02:26:13.200Z] Total : 5558.33 21.71 0.00 0.00 22988.54 4701.50 81606.57 00:17:53.059 { 00:17:53.059 "results": [ 00:17:53.059 { 00:17:53.059 "job": "TLSTESTn1", 00:17:53.059 "core_mask": "0x4", 00:17:53.059 "workload": "verify", 00:17:53.059 "status": "finished", 00:17:53.059 "verify_range": { 00:17:53.059 "start": 0, 00:17:53.059 "length": 8192 00:17:53.059 }, 00:17:53.059 "queue_depth": 128, 00:17:53.059 "io_size": 4096, 00:17:53.059 "runtime": 10.018844, 00:17:53.059 "iops": 
5558.325890691581, 00:17:53.059 "mibps": 21.712210510513987, 00:17:53.059 "io_failed": 0, 00:17:53.059 "io_timeout": 0, 00:17:53.059 "avg_latency_us": 22988.536553593625, 00:17:53.059 "min_latency_us": 4701.495652173913, 00:17:53.059 "max_latency_us": 81606.56695652174 00:17:53.059 } 00:17:53.059 ], 00:17:53.059 "core_count": 1 00:17:53.059 } 00:17:53.059 03:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.059 03:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2628951 00:17:53.059 03:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2628951 ']' 00:17:53.059 03:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2628951 00:17:53.059 03:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.059 03:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.059 03:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2628951 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2628951' 00:17:53.059 killing process with pid 2628951 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2628951 00:17:53.059 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.059 00:17:53.059 Latency(us) 00:17:53.059 [2024-12-06T02:26:13.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.059 [2024-12-06T02:26:13.200Z] 
=================================================================================================================== 00:17:53.059 [2024-12-06T02:26:13.200Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2628951 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1uLNxe6jKv 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1uLNxe6jKv 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1uLNxe6jKv 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1uLNxe6jKv 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2630789 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2630789 /var/tmp/bdevperf.sock 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2630789 ']' 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.059 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.318 [2024-12-06 03:26:13.226743] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:17:53.318 [2024-12-06 03:26:13.226794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630789 ] 00:17:53.318 [2024-12-06 03:26:13.285339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.318 [2024-12-06 03:26:13.322873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.318 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.318 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:53.319 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1uLNxe6jKv 00:17:53.577 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:53.836 [2024-12-06 03:26:13.786792] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.836 [2024-12-06 03:26:13.791743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:53.836 [2024-12-06 03:26:13.792309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2044dc0 (107): Transport endpoint is not connected 00:17:53.836 [2024-12-06 03:26:13.793301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2044dc0 (9): Bad file descriptor 00:17:53.836 
[2024-12-06 03:26:13.794303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:53.836 [2024-12-06 03:26:13.794313] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:53.836 [2024-12-06 03:26:13.794321] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:53.836 [2024-12-06 03:26:13.794332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:53.836 request: 00:17:53.836 { 00:17:53.836 "name": "TLSTEST", 00:17:53.836 "trtype": "tcp", 00:17:53.836 "traddr": "10.0.0.2", 00:17:53.836 "adrfam": "ipv4", 00:17:53.836 "trsvcid": "4420", 00:17:53.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.836 "prchk_reftag": false, 00:17:53.836 "prchk_guard": false, 00:17:53.836 "hdgst": false, 00:17:53.836 "ddgst": false, 00:17:53.836 "psk": "key0", 00:17:53.836 "allow_unrecognized_csi": false, 00:17:53.836 "method": "bdev_nvme_attach_controller", 00:17:53.836 "req_id": 1 00:17:53.836 } 00:17:53.836 Got JSON-RPC error response 00:17:53.836 response: 00:17:53.836 { 00:17:53.836 "code": -5, 00:17:53.836 "message": "Input/output error" 00:17:53.836 } 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2630789 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2630789 ']' 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2630789 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630789 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630789' 00:17:53.836 killing process with pid 2630789 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2630789 00:17:53.836 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.836 00:17:53.836 Latency(us) 00:17:53.836 [2024-12-06T02:26:13.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.836 [2024-12-06T02:26:13.977Z] =================================================================================================================== 00:17:53.836 [2024-12-06T02:26:13.977Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.836 03:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2630789 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.D12tBfAMKg 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
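The `NOT run_bdevperf …` cases being traced here rely on the autotest_common.sh expected-failure machinery (`local es=0` … `es=1` … `(( !es == 0 ))`): the wrapped command must fail for the test to pass. A rough Python equivalent of that pattern, with illustrative names and a subprocess stand-in for the wrapped command:

```python
import subprocess
import sys

def expect_failure(cmd):
    """Mimic the bash NOT helper: run cmd, record its exit status es,
    and report success only when es is non-zero. autotest_common.sh
    additionally special-cases es > 128 (killed by a signal); here any
    non-zero status counts as the failure we expected."""
    es = subprocess.run(cmd).returncode
    return es != 0

# Stand-ins for a bdevperf run that must fail (wrong PSK) vs. one that
# succeeds; the real test wraps run_bdevperf, which returns 1 when
# bdev_nvme_attach_controller gets an Input/output error as above.
failing_cmd = [sys.executable, "-c", "raise SystemExit(1)"]
passing_cmd = [sys.executable, "-c", "pass"]
print(expect_failure(failing_cmd), expect_failure(passing_cmd))
```

This is why the attach-controller error response (`"code": -5, "Input/output error"`) above is the desired outcome: the bdevperf process exits non-zero, `return 1` propagates, and `NOT` converts that into a passing check.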
00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.D12tBfAMKg 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.D12tBfAMKg 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.D12tBfAMKg 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2631021 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2631021 
/var/tmp/bdevperf.sock 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2631021 ']' 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.096 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.096 [2024-12-06 03:26:14.071913] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:17:54.096 [2024-12-06 03:26:14.071967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631021 ] 00:17:54.096 [2024-12-06 03:26:14.129914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.096 [2024-12-06 03:26:14.166797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.355 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.356 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:54.356 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D12tBfAMKg 00:17:54.356 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:54.616 [2024-12-06 03:26:14.619202] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.616 [2024-12-06 03:26:14.623963] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:54.616 [2024-12-06 03:26:14.623986] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:54.616 [2024-12-06 03:26:14.624024] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:54.616 [2024-12-06 03:26:14.624677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb16dc0 (107): Transport endpoint is not connected 00:17:54.616 [2024-12-06 03:26:14.625669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb16dc0 (9): Bad file descriptor 00:17:54.616 [2024-12-06 03:26:14.626671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:54.616 [2024-12-06 03:26:14.626681] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:54.616 [2024-12-06 03:26:14.626689] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:54.616 [2024-12-06 03:26:14.626699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:54.616 request: 00:17:54.616 { 00:17:54.616 "name": "TLSTEST", 00:17:54.616 "trtype": "tcp", 00:17:54.616 "traddr": "10.0.0.2", 00:17:54.616 "adrfam": "ipv4", 00:17:54.616 "trsvcid": "4420", 00:17:54.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.616 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:54.616 "prchk_reftag": false, 00:17:54.616 "prchk_guard": false, 00:17:54.616 "hdgst": false, 00:17:54.616 "ddgst": false, 00:17:54.616 "psk": "key0", 00:17:54.616 "allow_unrecognized_csi": false, 00:17:54.616 "method": "bdev_nvme_attach_controller", 00:17:54.616 "req_id": 1 00:17:54.616 } 00:17:54.616 Got JSON-RPC error response 00:17:54.616 response: 00:17:54.616 { 00:17:54.616 "code": -5, 00:17:54.616 "message": "Input/output error" 00:17:54.616 } 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2631021 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2631021 ']' 00:17:54.616 03:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2631021 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631021 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631021' 00:17:54.616 killing process with pid 2631021 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2631021 00:17:54.616 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.616 00:17:54.616 Latency(us) 00:17:54.616 [2024-12-06T02:26:14.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.616 [2024-12-06T02:26:14.757Z] =================================================================================================================== 00:17:54.616 [2024-12-06T02:26:14.757Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.616 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2631021 00:17:54.875 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:54.875 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:54.875 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.875 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.875 03:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.875 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.D12tBfAMKg 00:17:54.875 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.D12tBfAMKg 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.D12tBfAMKg 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.D12tBfAMKg 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2631041 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2631041 /var/tmp/bdevperf.sock 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2631041 ']' 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.876 03:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.876 [2024-12-06 03:26:14.901855] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:17:54.876 [2024-12-06 03:26:14.901908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631041 ] 00:17:54.876 [2024-12-06 03:26:14.961075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.876 [2024-12-06 03:26:14.999819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.136 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.136 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:55.136 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D12tBfAMKg 00:17:55.394 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:55.394 [2024-12-06 03:26:15.467942] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.394 [2024-12-06 03:26:15.475038] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:55.394 [2024-12-06 03:26:15.475059] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:55.394 [2024-12-06 03:26:15.475085] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:55.395 [2024-12-06 03:26:15.475438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2dc0 (107): Transport endpoint is not connected 00:17:55.395 [2024-12-06 03:26:15.476432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2dc0 (9): Bad file descriptor 00:17:55.395 [2024-12-06 03:26:15.477434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:55.395 [2024-12-06 03:26:15.477443] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:55.395 [2024-12-06 03:26:15.477451] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:55.395 [2024-12-06 03:26:15.477461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:17:55.395 request: 00:17:55.395 { 00:17:55.395 "name": "TLSTEST", 00:17:55.395 "trtype": "tcp", 00:17:55.395 "traddr": "10.0.0.2", 00:17:55.395 "adrfam": "ipv4", 00:17:55.395 "trsvcid": "4420", 00:17:55.395 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:55.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.395 "prchk_reftag": false, 00:17:55.395 "prchk_guard": false, 00:17:55.395 "hdgst": false, 00:17:55.395 "ddgst": false, 00:17:55.395 "psk": "key0", 00:17:55.395 "allow_unrecognized_csi": false, 00:17:55.395 "method": "bdev_nvme_attach_controller", 00:17:55.395 "req_id": 1 00:17:55.395 } 00:17:55.395 Got JSON-RPC error response 00:17:55.395 response: 00:17:55.395 { 00:17:55.395 "code": -5, 00:17:55.395 "message": "Input/output error" 00:17:55.395 } 00:17:55.395 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2631041 00:17:55.395 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2631041 ']' 00:17:55.395 03:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2631041 00:17:55.395 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:55.395 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.395 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631041 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631041' 00:17:55.653 killing process with pid 2631041 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2631041 00:17:55.653 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.653 00:17:55.653 Latency(us) 00:17:55.653 [2024-12-06T02:26:15.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.653 [2024-12-06T02:26:15.794Z] =================================================================================================================== 00:17:55.653 [2024-12-06T02:26:15.794Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2631041 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.653 03:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2631270 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.653 03:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2631270 /var/tmp/bdevperf.sock 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2631270 ']' 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.653 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.653 [2024-12-06 03:26:15.751666] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:17:55.653 [2024-12-06 03:26:15.751714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631270 ] 00:17:55.912 [2024-12-06 03:26:15.809924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.912 [2024-12-06 03:26:15.847388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.912 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.912 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:55.912 03:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:56.171 [2024-12-06 03:26:16.107065] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:56.171 [2024-12-06 03:26:16.107099] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:56.171 request: 00:17:56.171 { 00:17:56.171 "name": "key0", 00:17:56.171 "path": "", 00:17:56.171 "method": "keyring_file_add_key", 00:17:56.171 "req_id": 1 00:17:56.171 } 00:17:56.171 Got JSON-RPC error response 00:17:56.171 response: 00:17:56.171 { 00:17:56.171 "code": -1, 00:17:56.171 "message": "Operation not permitted" 00:17:56.171 } 00:17:56.171 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:56.171 [2024-12-06 03:26:16.299658] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:17:56.171 [2024-12-06 03:26:16.299690] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:56.171 request: 00:17:56.171 { 00:17:56.171 "name": "TLSTEST", 00:17:56.171 "trtype": "tcp", 00:17:56.171 "traddr": "10.0.0.2", 00:17:56.171 "adrfam": "ipv4", 00:17:56.171 "trsvcid": "4420", 00:17:56.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.171 "prchk_reftag": false, 00:17:56.171 "prchk_guard": false, 00:17:56.171 "hdgst": false, 00:17:56.171 "ddgst": false, 00:17:56.171 "psk": "key0", 00:17:56.171 "allow_unrecognized_csi": false, 00:17:56.171 "method": "bdev_nvme_attach_controller", 00:17:56.171 "req_id": 1 00:17:56.171 } 00:17:56.171 Got JSON-RPC error response 00:17:56.171 response: 00:17:56.171 { 00:17:56.171 "code": -126, 00:17:56.171 "message": "Required key not available" 00:17:56.171 } 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2631270 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2631270 ']' 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2631270 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631270 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631270' 00:17:56.431 killing process with pid 2631270 
00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2631270 00:17:56.431 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.431 00:17:56.431 Latency(us) 00:17:56.431 [2024-12-06T02:26:16.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.431 [2024-12-06T02:26:16.572Z] =================================================================================================================== 00:17:56.431 [2024-12-06T02:26:16.572Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2631270 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2626603 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2626603 ']' 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2626603 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.431 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2626603 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2626603' 00:17:56.691 killing process with pid 2626603 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2626603 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2626603 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.yx0YCZglnE 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:56.691 03:26:16 
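The `format_interchange_psk` step above builds the long TLS key string from a raw hex key and a digest id via an inline `python -` heredoc in `nvmf/common.sh` (the heredoc body is not shown in the trace). A minimal sketch of that transformation, assuming the helper base64-encodes the ASCII key characters followed by their little-endian CRC32, which is consistent with the `key_long` value the log reports:

```python
import base64
import zlib


def format_interchange_psk(key_hex: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Build an NVMe TLS PSK interchange string: <prefix>:<digest>:<base64(key || crc32)>:

    Assumption (inferred from the trace, not shown in it): the CRC32 of the
    ASCII key string is appended little-endian before base64 encoding.
    """
    key = key_hex.encode("ascii")
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key + crc).decode("ascii")
    return f"{prefix}:{digest:02x}:{b64}:"


# Inputs taken from the log: 48 hex chars of key material, digest id 2.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
```

If the assumption holds, the printed string matches the `key_long=NVMeTLSkey-1:02:...` value in the trace, which the test then writes to a `mktemp` file and `chmod 0600`s before registering it with `keyring_file_add_key`.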
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.yx0YCZglnE 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2631514 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2631514 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2631514 ']' 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.691 03:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.950 [2024-12-06 03:26:16.868062] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:17:56.950 [2024-12-06 03:26:16.868111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.950 [2024-12-06 03:26:16.934380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.950 [2024-12-06 03:26:16.974675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.950 [2024-12-06 03:26:16.974716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.950 [2024-12-06 03:26:16.974723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.950 [2024-12-06 03:26:16.974729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.950 [2024-12-06 03:26:16.974737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:56.950 [2024-12-06 03:26:16.975276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.950 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.950 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:56.950 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:56.950 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.950 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.208 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.208 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.yx0YCZglnE 00:17:57.208 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yx0YCZglnE 00:17:57.208 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:57.208 [2024-12-06 03:26:17.275231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.208 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:57.466 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:57.724 [2024-12-06 03:26:17.652193] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:57.724 [2024-12-06 03:26:17.652417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:57.724 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:57.724 malloc0 00:17:57.724 03:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:57.982 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yx0YCZglnE 00:17:58.242 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yx0YCZglnE 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yx0YCZglnE 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2631776 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.501 03:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2631776 /var/tmp/bdevperf.sock 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2631776 ']' 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.501 [2024-12-06 03:26:18.448806] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:17:58.501 [2024-12-06 03:26:18.448855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631776 ] 00:17:58.501 [2024-12-06 03:26:18.506867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.501 [2024-12-06 03:26:18.548570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:58.501 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yx0YCZglnE 00:17:58.759 03:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:59.018 [2024-12-06 03:26:18.988598] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.018 TLSTESTn1 00:17:59.018 03:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:59.277 Running I/O for 10 seconds... 
00:18:01.151 5318.00 IOPS, 20.77 MiB/s [2024-12-06T02:26:22.231Z] 5375.00 IOPS, 21.00 MiB/s [2024-12-06T02:26:23.609Z] 5433.33 IOPS, 21.22 MiB/s [2024-12-06T02:26:24.547Z] 5461.75 IOPS, 21.33 MiB/s [2024-12-06T02:26:25.484Z] 5457.40 IOPS, 21.32 MiB/s [2024-12-06T02:26:26.416Z] 5446.83 IOPS, 21.28 MiB/s [2024-12-06T02:26:27.350Z] 5442.57 IOPS, 21.26 MiB/s [2024-12-06T02:26:28.285Z] 5460.00 IOPS, 21.33 MiB/s [2024-12-06T02:26:29.222Z] 5478.11 IOPS, 21.40 MiB/s [2024-12-06T02:26:29.223Z] 5483.10 IOPS, 21.42 MiB/s 00:18:09.082 Latency(us) 00:18:09.082 [2024-12-06T02:26:29.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.082 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:09.082 Verification LBA range: start 0x0 length 0x2000 00:18:09.082 TLSTESTn1 : 10.01 5487.97 21.44 0.00 0.00 23287.63 5271.37 21541.40 00:18:09.082 [2024-12-06T02:26:29.223Z] =================================================================================================================== 00:18:09.082 [2024-12-06T02:26:29.223Z] Total : 5487.97 21.44 0.00 0.00 23287.63 5271.37 21541.40 00:18:09.082 { 00:18:09.082 "results": [ 00:18:09.082 { 00:18:09.082 "job": "TLSTESTn1", 00:18:09.082 "core_mask": "0x4", 00:18:09.082 "workload": "verify", 00:18:09.082 "status": "finished", 00:18:09.082 "verify_range": { 00:18:09.082 "start": 0, 00:18:09.082 "length": 8192 00:18:09.082 }, 00:18:09.082 "queue_depth": 128, 00:18:09.082 "io_size": 4096, 00:18:09.082 "runtime": 10.014092, 00:18:09.082 "iops": 5487.9663578085765, 00:18:09.082 "mibps": 21.437368585189752, 00:18:09.082 "io_failed": 0, 00:18:09.082 "io_timeout": 0, 00:18:09.082 "avg_latency_us": 23287.62815184362, 00:18:09.082 "min_latency_us": 5271.373913043478, 00:18:09.082 "max_latency_us": 21541.398260869566 00:18:09.082 } 00:18:09.082 ], 00:18:09.082 "core_count": 1 00:18:09.082 } 00:18:09.341 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2631776 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2631776 ']' 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2631776 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631776 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631776' 00:18:09.342 killing process with pid 2631776 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2631776 00:18:09.342 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.342 00:18:09.342 Latency(us) 00:18:09.342 [2024-12-06T02:26:29.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.342 [2024-12-06T02:26:29.483Z] =================================================================================================================== 00:18:09.342 [2024-12-06T02:26:29.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2631776 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.yx0YCZglnE 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yx0YCZglnE 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yx0YCZglnE 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yx0YCZglnE 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yx0YCZglnE 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2633573 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.342 
03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2633573 /var/tmp/bdevperf.sock 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2633573 ']' 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.342 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.601 [2024-12-06 03:26:29.496780] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:18:09.601 [2024-12-06 03:26:29.496829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633573 ] 00:18:09.601 [2024-12-06 03:26:29.554749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.601 [2024-12-06 03:26:29.597432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.601 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.602 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.602 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yx0YCZglnE 00:18:09.861 [2024-12-06 03:26:29.860847] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yx0YCZglnE': 0100666 00:18:09.861 [2024-12-06 03:26:29.860876] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:09.861 request: 00:18:09.861 { 00:18:09.861 "name": "key0", 00:18:09.861 "path": "/tmp/tmp.yx0YCZglnE", 00:18:09.861 "method": "keyring_file_add_key", 00:18:09.861 "req_id": 1 00:18:09.861 } 00:18:09.861 Got JSON-RPC error response 00:18:09.861 response: 00:18:09.861 { 00:18:09.861 "code": -1, 00:18:09.862 "message": "Operation not permitted" 00:18:09.862 } 00:18:09.862 03:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:10.120 [2024-12-06 03:26:30.053453] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.121 [2024-12-06 03:26:30.053503] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:10.121 request: 00:18:10.121 { 00:18:10.121 "name": "TLSTEST", 00:18:10.121 "trtype": "tcp", 00:18:10.121 "traddr": "10.0.0.2", 00:18:10.121 "adrfam": "ipv4", 00:18:10.121 "trsvcid": "4420", 00:18:10.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.121 "prchk_reftag": false, 00:18:10.121 "prchk_guard": false, 00:18:10.121 "hdgst": false, 00:18:10.121 "ddgst": false, 00:18:10.121 "psk": "key0", 00:18:10.121 "allow_unrecognized_csi": false, 00:18:10.121 "method": "bdev_nvme_attach_controller", 00:18:10.121 "req_id": 1 00:18:10.121 } 00:18:10.121 Got JSON-RPC error response 00:18:10.121 response: 00:18:10.121 { 00:18:10.121 "code": -126, 00:18:10.121 "message": "Required key not available" 00:18:10.121 } 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2633573 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2633573 ']' 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2633573 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2633573 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2633573' 00:18:10.121 killing process with pid 2633573 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2633573 00:18:10.121 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.121 00:18:10.121 Latency(us) 00:18:10.121 [2024-12-06T02:26:30.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.121 [2024-12-06T02:26:30.262Z] =================================================================================================================== 00:18:10.121 [2024-12-06T02:26:30.262Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.121 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2633573 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2631514 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2631514 ']' 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2631514 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631514 00:18:10.412 
03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631514' 00:18:10.412 killing process with pid 2631514 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2631514 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2631514 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2633636 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.412 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2633636 00:18:10.413 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2633636 ']' 00:18:10.413 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.413 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.413 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:10.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.413 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.413 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.702 [2024-12-06 03:26:30.560101] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:18:10.702 [2024-12-06 03:26:30.560149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.702 [2024-12-06 03:26:30.628935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.702 [2024-12-06 03:26:30.670208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.702 [2024-12-06 03:26:30.670248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.702 [2024-12-06 03:26:30.670256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.702 [2024-12-06 03:26:30.670262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.702 [2024-12-06 03:26:30.670267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.702 [2024-12-06 03:26:30.670837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.yx0YCZglnE 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.yx0YCZglnE 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.yx0YCZglnE 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yx0YCZglnE 00:18:10.702 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:10.982 [2024-12-06 03:26:30.980649] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.982 03:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:11.284 03:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:11.284 [2024-12-06 03:26:31.349603] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:11.284 [2024-12-06 03:26:31.349822] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.284 03:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:11.629 malloc0 00:18:11.629 03:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:11.629 03:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yx0YCZglnE 00:18:11.887 [2024-12-06 03:26:31.899100] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yx0YCZglnE': 0100666 00:18:11.887 [2024-12-06 03:26:31.899132] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:11.887 request: 00:18:11.887 { 00:18:11.887 "name": "key0", 00:18:11.887 "path": "/tmp/tmp.yx0YCZglnE", 00:18:11.887 "method": "keyring_file_add_key", 00:18:11.887 "req_id": 1 
00:18:11.887 } 00:18:11.887 Got JSON-RPC error response 00:18:11.887 response: 00:18:11.887 { 00:18:11.887 "code": -1, 00:18:11.887 "message": "Operation not permitted" 00:18:11.887 } 00:18:11.887 03:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:12.145 [2024-12-06 03:26:32.103657] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:12.145 [2024-12-06 03:26:32.103693] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:12.145 request: 00:18:12.145 { 00:18:12.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.145 "host": "nqn.2016-06.io.spdk:host1", 00:18:12.145 "psk": "key0", 00:18:12.145 "method": "nvmf_subsystem_add_host", 00:18:12.145 "req_id": 1 00:18:12.145 } 00:18:12.145 Got JSON-RPC error response 00:18:12.145 response: 00:18:12.145 { 00:18:12.145 "code": -32603, 00:18:12.145 "message": "Internal error" 00:18:12.145 } 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2633636 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2633636 ']' 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2633636 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:12.145 03:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2633636 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2633636' 00:18:12.145 killing process with pid 2633636 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2633636 00:18:12.145 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2633636 00:18:12.403 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.yx0YCZglnE 00:18:12.403 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:12.403 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:12.403 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2634121 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2634121 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2634121 ']' 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.404 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.404 [2024-12-06 03:26:32.399182] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:18:12.404 [2024-12-06 03:26:32.399230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.404 [2024-12-06 03:26:32.464898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.404 [2024-12-06 03:26:32.505809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.404 [2024-12-06 03:26:32.505844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.404 [2024-12-06 03:26:32.505851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.404 [2024-12-06 03:26:32.505857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.404 [2024-12-06 03:26:32.505862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:12.404 [2024-12-06 03:26:32.506398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.662 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.662 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.662 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.662 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.662 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.662 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.662 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.yx0YCZglnE 00:18:12.662 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yx0YCZglnE 00:18:12.662 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.920 [2024-12-06 03:26:32.816019] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.920 03:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.920 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:13.179 [2024-12-06 03:26:33.188989] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:13.179 [2024-12-06 03:26:33.189201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:13.179 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:13.439 malloc0 00:18:13.439 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:13.698 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yx0YCZglnE 00:18:13.698 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2634380 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2634380 /var/tmp/bdevperf.sock 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2634380 ']' 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:13.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.957 03:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.957 [2024-12-06 03:26:33.997819] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:18:13.957 [2024-12-06 03:26:33.997866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634380 ] 00:18:13.957 [2024-12-06 03:26:34.055903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.217 [2024-12-06 03:26:34.097413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.217 03:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.217 03:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.217 03:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yx0YCZglnE 00:18:14.476 03:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.476 [2024-12-06 03:26:34.549741] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.735 TLSTESTn1 00:18:14.735 03:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:14.994 03:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:14.994 "subsystems": [ 00:18:14.994 { 00:18:14.994 "subsystem": "keyring", 00:18:14.994 "config": [ 00:18:14.994 { 00:18:14.994 "method": "keyring_file_add_key", 00:18:14.994 "params": { 00:18:14.994 "name": "key0", 00:18:14.994 "path": "/tmp/tmp.yx0YCZglnE" 00:18:14.994 } 00:18:14.994 } 00:18:14.994 ] 00:18:14.994 }, 00:18:14.994 { 00:18:14.994 "subsystem": "iobuf", 00:18:14.994 "config": [ 00:18:14.994 { 00:18:14.994 "method": "iobuf_set_options", 00:18:14.994 "params": { 00:18:14.994 "small_pool_count": 8192, 00:18:14.994 "large_pool_count": 1024, 00:18:14.995 "small_bufsize": 8192, 00:18:14.995 "large_bufsize": 135168, 00:18:14.995 "enable_numa": false 00:18:14.995 } 00:18:14.995 } 00:18:14.995 ] 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "subsystem": "sock", 00:18:14.995 "config": [ 00:18:14.995 { 00:18:14.995 "method": "sock_set_default_impl", 00:18:14.995 "params": { 00:18:14.995 "impl_name": "posix" 00:18:14.995 } 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "method": "sock_impl_set_options", 00:18:14.995 "params": { 00:18:14.995 "impl_name": "ssl", 00:18:14.995 "recv_buf_size": 4096, 00:18:14.995 "send_buf_size": 4096, 00:18:14.995 "enable_recv_pipe": true, 00:18:14.995 "enable_quickack": false, 00:18:14.995 "enable_placement_id": 0, 00:18:14.995 "enable_zerocopy_send_server": true, 00:18:14.995 "enable_zerocopy_send_client": false, 00:18:14.995 "zerocopy_threshold": 0, 00:18:14.995 "tls_version": 0, 00:18:14.995 "enable_ktls": false 00:18:14.995 } 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "method": "sock_impl_set_options", 00:18:14.995 "params": { 00:18:14.995 "impl_name": "posix", 00:18:14.995 "recv_buf_size": 2097152, 00:18:14.995 "send_buf_size": 2097152, 00:18:14.995 "enable_recv_pipe": true, 00:18:14.995 "enable_quickack": false, 00:18:14.995 "enable_placement_id": 0, 
00:18:14.995 "enable_zerocopy_send_server": true, 00:18:14.995 "enable_zerocopy_send_client": false, 00:18:14.995 "zerocopy_threshold": 0, 00:18:14.995 "tls_version": 0, 00:18:14.995 "enable_ktls": false 00:18:14.995 } 00:18:14.995 } 00:18:14.995 ] 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "subsystem": "vmd", 00:18:14.995 "config": [] 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "subsystem": "accel", 00:18:14.995 "config": [ 00:18:14.995 { 00:18:14.995 "method": "accel_set_options", 00:18:14.995 "params": { 00:18:14.995 "small_cache_size": 128, 00:18:14.995 "large_cache_size": 16, 00:18:14.995 "task_count": 2048, 00:18:14.995 "sequence_count": 2048, 00:18:14.995 "buf_count": 2048 00:18:14.995 } 00:18:14.995 } 00:18:14.995 ] 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "subsystem": "bdev", 00:18:14.995 "config": [ 00:18:14.995 { 00:18:14.995 "method": "bdev_set_options", 00:18:14.995 "params": { 00:18:14.995 "bdev_io_pool_size": 65535, 00:18:14.995 "bdev_io_cache_size": 256, 00:18:14.995 "bdev_auto_examine": true, 00:18:14.995 "iobuf_small_cache_size": 128, 00:18:14.995 "iobuf_large_cache_size": 16 00:18:14.995 } 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "method": "bdev_raid_set_options", 00:18:14.995 "params": { 00:18:14.995 "process_window_size_kb": 1024, 00:18:14.995 "process_max_bandwidth_mb_sec": 0 00:18:14.995 } 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "method": "bdev_iscsi_set_options", 00:18:14.995 "params": { 00:18:14.995 "timeout_sec": 30 00:18:14.995 } 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "method": "bdev_nvme_set_options", 00:18:14.995 "params": { 00:18:14.995 "action_on_timeout": "none", 00:18:14.995 "timeout_us": 0, 00:18:14.995 "timeout_admin_us": 0, 00:18:14.995 "keep_alive_timeout_ms": 10000, 00:18:14.995 "arbitration_burst": 0, 00:18:14.995 "low_priority_weight": 0, 00:18:14.995 "medium_priority_weight": 0, 00:18:14.995 "high_priority_weight": 0, 00:18:14.995 "nvme_adminq_poll_period_us": 10000, 00:18:14.995 "nvme_ioq_poll_period_us": 0, 
00:18:14.995 "io_queue_requests": 0, 00:18:14.995 "delay_cmd_submit": true, 00:18:14.995 "transport_retry_count": 4, 00:18:14.995 "bdev_retry_count": 3, 00:18:14.995 "transport_ack_timeout": 0, 00:18:14.995 "ctrlr_loss_timeout_sec": 0, 00:18:14.995 "reconnect_delay_sec": 0, 00:18:14.995 "fast_io_fail_timeout_sec": 0, 00:18:14.995 "disable_auto_failback": false, 00:18:14.995 "generate_uuids": false, 00:18:14.995 "transport_tos": 0, 00:18:14.995 "nvme_error_stat": false, 00:18:14.995 "rdma_srq_size": 0, 00:18:14.995 "io_path_stat": false, 00:18:14.995 "allow_accel_sequence": false, 00:18:14.995 "rdma_max_cq_size": 0, 00:18:14.995 "rdma_cm_event_timeout_ms": 0, 00:18:14.995 "dhchap_digests": [ 00:18:14.995 "sha256", 00:18:14.995 "sha384", 00:18:14.995 "sha512" 00:18:14.995 ], 00:18:14.995 "dhchap_dhgroups": [ 00:18:14.995 "null", 00:18:14.995 "ffdhe2048", 00:18:14.995 "ffdhe3072", 00:18:14.995 "ffdhe4096", 00:18:14.995 "ffdhe6144", 00:18:14.995 "ffdhe8192" 00:18:14.995 ] 00:18:14.995 } 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "method": "bdev_nvme_set_hotplug", 00:18:14.995 "params": { 00:18:14.995 "period_us": 100000, 00:18:14.995 "enable": false 00:18:14.995 } 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "method": "bdev_malloc_create", 00:18:14.995 "params": { 00:18:14.995 "name": "malloc0", 00:18:14.995 "num_blocks": 8192, 00:18:14.995 "block_size": 4096, 00:18:14.995 "physical_block_size": 4096, 00:18:14.995 "uuid": "b8841044-85ec-4728-9dc5-8479d3200f8c", 00:18:14.995 "optimal_io_boundary": 0, 00:18:14.995 "md_size": 0, 00:18:14.995 "dif_type": 0, 00:18:14.995 "dif_is_head_of_md": false, 00:18:14.995 "dif_pi_format": 0 00:18:14.995 } 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "method": "bdev_wait_for_examine" 00:18:14.995 } 00:18:14.995 ] 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "subsystem": "nbd", 00:18:14.995 "config": [] 00:18:14.995 }, 00:18:14.995 { 00:18:14.995 "subsystem": "scheduler", 00:18:14.995 "config": [ 00:18:14.996 { 00:18:14.996 "method": 
"framework_set_scheduler", 00:18:14.996 "params": { 00:18:14.996 "name": "static" 00:18:14.996 } 00:18:14.996 } 00:18:14.996 ] 00:18:14.996 }, 00:18:14.996 { 00:18:14.996 "subsystem": "nvmf", 00:18:14.996 "config": [ 00:18:14.996 { 00:18:14.996 "method": "nvmf_set_config", 00:18:14.996 "params": { 00:18:14.996 "discovery_filter": "match_any", 00:18:14.996 "admin_cmd_passthru": { 00:18:14.996 "identify_ctrlr": false 00:18:14.996 }, 00:18:14.996 "dhchap_digests": [ 00:18:14.996 "sha256", 00:18:14.996 "sha384", 00:18:14.996 "sha512" 00:18:14.996 ], 00:18:14.996 "dhchap_dhgroups": [ 00:18:14.996 "null", 00:18:14.996 "ffdhe2048", 00:18:14.996 "ffdhe3072", 00:18:14.996 "ffdhe4096", 00:18:14.996 "ffdhe6144", 00:18:14.996 "ffdhe8192" 00:18:14.996 ] 00:18:14.996 } 00:18:14.996 }, 00:18:14.996 { 00:18:14.996 "method": "nvmf_set_max_subsystems", 00:18:14.996 "params": { 00:18:14.996 "max_subsystems": 1024 00:18:14.996 } 00:18:14.996 }, 00:18:14.996 { 00:18:14.996 "method": "nvmf_set_crdt", 00:18:14.996 "params": { 00:18:14.996 "crdt1": 0, 00:18:14.996 "crdt2": 0, 00:18:14.996 "crdt3": 0 00:18:14.996 } 00:18:14.996 }, 00:18:14.996 { 00:18:14.996 "method": "nvmf_create_transport", 00:18:14.996 "params": { 00:18:14.996 "trtype": "TCP", 00:18:14.996 "max_queue_depth": 128, 00:18:14.996 "max_io_qpairs_per_ctrlr": 127, 00:18:14.996 "in_capsule_data_size": 4096, 00:18:14.996 "max_io_size": 131072, 00:18:14.996 "io_unit_size": 131072, 00:18:14.996 "max_aq_depth": 128, 00:18:14.996 "num_shared_buffers": 511, 00:18:14.996 "buf_cache_size": 4294967295, 00:18:14.996 "dif_insert_or_strip": false, 00:18:14.996 "zcopy": false, 00:18:14.996 "c2h_success": false, 00:18:14.996 "sock_priority": 0, 00:18:14.996 "abort_timeout_sec": 1, 00:18:14.996 "ack_timeout": 0, 00:18:14.996 "data_wr_pool_size": 0 00:18:14.996 } 00:18:14.996 }, 00:18:14.996 { 00:18:14.996 "method": "nvmf_create_subsystem", 00:18:14.996 "params": { 00:18:14.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.996 
"allow_any_host": false, 00:18:14.996 "serial_number": "SPDK00000000000001", 00:18:14.996 "model_number": "SPDK bdev Controller", 00:18:14.996 "max_namespaces": 10, 00:18:14.996 "min_cntlid": 1, 00:18:14.996 "max_cntlid": 65519, 00:18:14.996 "ana_reporting": false 00:18:14.996 } 00:18:14.996 }, 00:18:14.996 { 00:18:14.996 "method": "nvmf_subsystem_add_host", 00:18:14.996 "params": { 00:18:14.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.996 "host": "nqn.2016-06.io.spdk:host1", 00:18:14.996 "psk": "key0" 00:18:14.996 } 00:18:14.996 }, 00:18:14.996 { 00:18:14.996 "method": "nvmf_subsystem_add_ns", 00:18:14.996 "params": { 00:18:14.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.996 "namespace": { 00:18:14.996 "nsid": 1, 00:18:14.996 "bdev_name": "malloc0", 00:18:14.996 "nguid": "B884104485EC47289DC58479D3200F8C", 00:18:14.996 "uuid": "b8841044-85ec-4728-9dc5-8479d3200f8c", 00:18:14.996 "no_auto_visible": false 00:18:14.996 } 00:18:14.996 } 00:18:14.996 }, 00:18:14.996 { 00:18:14.996 "method": "nvmf_subsystem_add_listener", 00:18:14.996 "params": { 00:18:14.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.996 "listen_address": { 00:18:14.996 "trtype": "TCP", 00:18:14.996 "adrfam": "IPv4", 00:18:14.996 "traddr": "10.0.0.2", 00:18:14.996 "trsvcid": "4420" 00:18:14.996 }, 00:18:14.996 "secure_channel": true 00:18:14.996 } 00:18:14.996 } 00:18:14.996 ] 00:18:14.996 } 00:18:14.996 ] 00:18:14.996 }' 00:18:14.996 03:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:15.255 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:15.255 "subsystems": [ 00:18:15.255 { 00:18:15.255 "subsystem": "keyring", 00:18:15.255 "config": [ 00:18:15.255 { 00:18:15.255 "method": "keyring_file_add_key", 00:18:15.255 "params": { 00:18:15.255 "name": "key0", 00:18:15.255 "path": "/tmp/tmp.yx0YCZglnE" 00:18:15.255 } 
00:18:15.255 } 00:18:15.255 ] 00:18:15.255 }, 00:18:15.255 { 00:18:15.255 "subsystem": "iobuf", 00:18:15.255 "config": [ 00:18:15.255 { 00:18:15.255 "method": "iobuf_set_options", 00:18:15.255 "params": { 00:18:15.255 "small_pool_count": 8192, 00:18:15.255 "large_pool_count": 1024, 00:18:15.256 "small_bufsize": 8192, 00:18:15.256 "large_bufsize": 135168, 00:18:15.256 "enable_numa": false 00:18:15.256 } 00:18:15.256 } 00:18:15.256 ] 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "subsystem": "sock", 00:18:15.256 "config": [ 00:18:15.256 { 00:18:15.256 "method": "sock_set_default_impl", 00:18:15.256 "params": { 00:18:15.256 "impl_name": "posix" 00:18:15.256 } 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "method": "sock_impl_set_options", 00:18:15.256 "params": { 00:18:15.256 "impl_name": "ssl", 00:18:15.256 "recv_buf_size": 4096, 00:18:15.256 "send_buf_size": 4096, 00:18:15.256 "enable_recv_pipe": true, 00:18:15.256 "enable_quickack": false, 00:18:15.256 "enable_placement_id": 0, 00:18:15.256 "enable_zerocopy_send_server": true, 00:18:15.256 "enable_zerocopy_send_client": false, 00:18:15.256 "zerocopy_threshold": 0, 00:18:15.256 "tls_version": 0, 00:18:15.256 "enable_ktls": false 00:18:15.256 } 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "method": "sock_impl_set_options", 00:18:15.256 "params": { 00:18:15.256 "impl_name": "posix", 00:18:15.256 "recv_buf_size": 2097152, 00:18:15.256 "send_buf_size": 2097152, 00:18:15.256 "enable_recv_pipe": true, 00:18:15.256 "enable_quickack": false, 00:18:15.256 "enable_placement_id": 0, 00:18:15.256 "enable_zerocopy_send_server": true, 00:18:15.256 "enable_zerocopy_send_client": false, 00:18:15.256 "zerocopy_threshold": 0, 00:18:15.256 "tls_version": 0, 00:18:15.256 "enable_ktls": false 00:18:15.256 } 00:18:15.256 } 00:18:15.256 ] 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "subsystem": "vmd", 00:18:15.256 "config": [] 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "subsystem": "accel", 00:18:15.256 "config": [ 00:18:15.256 { 00:18:15.256 
"method": "accel_set_options", 00:18:15.256 "params": { 00:18:15.256 "small_cache_size": 128, 00:18:15.256 "large_cache_size": 16, 00:18:15.256 "task_count": 2048, 00:18:15.256 "sequence_count": 2048, 00:18:15.256 "buf_count": 2048 00:18:15.256 } 00:18:15.256 } 00:18:15.256 ] 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "subsystem": "bdev", 00:18:15.256 "config": [ 00:18:15.256 { 00:18:15.256 "method": "bdev_set_options", 00:18:15.256 "params": { 00:18:15.256 "bdev_io_pool_size": 65535, 00:18:15.256 "bdev_io_cache_size": 256, 00:18:15.256 "bdev_auto_examine": true, 00:18:15.256 "iobuf_small_cache_size": 128, 00:18:15.256 "iobuf_large_cache_size": 16 00:18:15.256 } 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "method": "bdev_raid_set_options", 00:18:15.256 "params": { 00:18:15.256 "process_window_size_kb": 1024, 00:18:15.256 "process_max_bandwidth_mb_sec": 0 00:18:15.256 } 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "method": "bdev_iscsi_set_options", 00:18:15.256 "params": { 00:18:15.256 "timeout_sec": 30 00:18:15.256 } 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "method": "bdev_nvme_set_options", 00:18:15.256 "params": { 00:18:15.256 "action_on_timeout": "none", 00:18:15.256 "timeout_us": 0, 00:18:15.256 "timeout_admin_us": 0, 00:18:15.256 "keep_alive_timeout_ms": 10000, 00:18:15.256 "arbitration_burst": 0, 00:18:15.256 "low_priority_weight": 0, 00:18:15.256 "medium_priority_weight": 0, 00:18:15.256 "high_priority_weight": 0, 00:18:15.256 "nvme_adminq_poll_period_us": 10000, 00:18:15.256 "nvme_ioq_poll_period_us": 0, 00:18:15.256 "io_queue_requests": 512, 00:18:15.256 "delay_cmd_submit": true, 00:18:15.256 "transport_retry_count": 4, 00:18:15.256 "bdev_retry_count": 3, 00:18:15.256 "transport_ack_timeout": 0, 00:18:15.256 "ctrlr_loss_timeout_sec": 0, 00:18:15.256 "reconnect_delay_sec": 0, 00:18:15.256 "fast_io_fail_timeout_sec": 0, 00:18:15.256 "disable_auto_failback": false, 00:18:15.256 "generate_uuids": false, 00:18:15.256 "transport_tos": 0, 00:18:15.256 
"nvme_error_stat": false, 00:18:15.256 "rdma_srq_size": 0, 00:18:15.256 "io_path_stat": false, 00:18:15.256 "allow_accel_sequence": false, 00:18:15.256 "rdma_max_cq_size": 0, 00:18:15.256 "rdma_cm_event_timeout_ms": 0, 00:18:15.256 "dhchap_digests": [ 00:18:15.256 "sha256", 00:18:15.256 "sha384", 00:18:15.256 "sha512" 00:18:15.256 ], 00:18:15.256 "dhchap_dhgroups": [ 00:18:15.256 "null", 00:18:15.256 "ffdhe2048", 00:18:15.256 "ffdhe3072", 00:18:15.256 "ffdhe4096", 00:18:15.256 "ffdhe6144", 00:18:15.256 "ffdhe8192" 00:18:15.256 ] 00:18:15.256 } 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "method": "bdev_nvme_attach_controller", 00:18:15.256 "params": { 00:18:15.256 "name": "TLSTEST", 00:18:15.256 "trtype": "TCP", 00:18:15.256 "adrfam": "IPv4", 00:18:15.256 "traddr": "10.0.0.2", 00:18:15.256 "trsvcid": "4420", 00:18:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.256 "prchk_reftag": false, 00:18:15.256 "prchk_guard": false, 00:18:15.256 "ctrlr_loss_timeout_sec": 0, 00:18:15.256 "reconnect_delay_sec": 0, 00:18:15.256 "fast_io_fail_timeout_sec": 0, 00:18:15.256 "psk": "key0", 00:18:15.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.256 "hdgst": false, 00:18:15.256 "ddgst": false, 00:18:15.256 "multipath": "multipath" 00:18:15.256 } 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "method": "bdev_nvme_set_hotplug", 00:18:15.256 "params": { 00:18:15.256 "period_us": 100000, 00:18:15.256 "enable": false 00:18:15.256 } 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "method": "bdev_wait_for_examine" 00:18:15.256 } 00:18:15.256 ] 00:18:15.256 }, 00:18:15.256 { 00:18:15.256 "subsystem": "nbd", 00:18:15.256 "config": [] 00:18:15.256 } 00:18:15.256 ] 00:18:15.256 }' 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2634380 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2634380 ']' 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2634380 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2634380 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2634380' 00:18:15.256 killing process with pid 2634380 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2634380 00:18:15.256 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.256 00:18:15.256 Latency(us) 00:18:15.256 [2024-12-06T02:26:35.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.256 [2024-12-06T02:26:35.397Z] =================================================================================================================== 00:18:15.256 [2024-12-06T02:26:35.397Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2634380 00:18:15.256 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2634121 00:18:15.257 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2634121 ']' 00:18:15.257 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2634121 00:18:15.257 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.257 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.257 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2634121 00:18:15.515 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:15.516 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:15.516 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2634121' 00:18:15.516 killing process with pid 2634121 00:18:15.516 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2634121 00:18:15.516 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2634121 00:18:15.516 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:15.516 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.516 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.516 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:15.516 "subsystems": [ 00:18:15.516 { 00:18:15.516 "subsystem": "keyring", 00:18:15.516 "config": [ 00:18:15.516 { 00:18:15.516 "method": "keyring_file_add_key", 00:18:15.516 "params": { 00:18:15.516 "name": "key0", 00:18:15.516 "path": "/tmp/tmp.yx0YCZglnE" 00:18:15.516 } 00:18:15.516 } 00:18:15.516 ] 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "subsystem": "iobuf", 00:18:15.516 "config": [ 00:18:15.516 { 00:18:15.516 "method": "iobuf_set_options", 00:18:15.516 "params": { 00:18:15.516 "small_pool_count": 8192, 00:18:15.516 "large_pool_count": 1024, 00:18:15.516 "small_bufsize": 8192, 00:18:15.516 "large_bufsize": 135168, 00:18:15.516 "enable_numa": false 00:18:15.516 } 00:18:15.516 } 00:18:15.516 ] 00:18:15.516 }, 
00:18:15.516 { 00:18:15.516 "subsystem": "sock", 00:18:15.516 "config": [ 00:18:15.516 { 00:18:15.516 "method": "sock_set_default_impl", 00:18:15.516 "params": { 00:18:15.516 "impl_name": "posix" 00:18:15.516 } 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "method": "sock_impl_set_options", 00:18:15.516 "params": { 00:18:15.516 "impl_name": "ssl", 00:18:15.516 "recv_buf_size": 4096, 00:18:15.516 "send_buf_size": 4096, 00:18:15.516 "enable_recv_pipe": true, 00:18:15.516 "enable_quickack": false, 00:18:15.516 "enable_placement_id": 0, 00:18:15.516 "enable_zerocopy_send_server": true, 00:18:15.516 "enable_zerocopy_send_client": false, 00:18:15.516 "zerocopy_threshold": 0, 00:18:15.516 "tls_version": 0, 00:18:15.516 "enable_ktls": false 00:18:15.516 } 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "method": "sock_impl_set_options", 00:18:15.516 "params": { 00:18:15.516 "impl_name": "posix", 00:18:15.516 "recv_buf_size": 2097152, 00:18:15.516 "send_buf_size": 2097152, 00:18:15.516 "enable_recv_pipe": true, 00:18:15.516 "enable_quickack": false, 00:18:15.516 "enable_placement_id": 0, 00:18:15.516 "enable_zerocopy_send_server": true, 00:18:15.516 "enable_zerocopy_send_client": false, 00:18:15.516 "zerocopy_threshold": 0, 00:18:15.516 "tls_version": 0, 00:18:15.516 "enable_ktls": false 00:18:15.516 } 00:18:15.516 } 00:18:15.516 ] 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "subsystem": "vmd", 00:18:15.516 "config": [] 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "subsystem": "accel", 00:18:15.516 "config": [ 00:18:15.516 { 00:18:15.516 "method": "accel_set_options", 00:18:15.516 "params": { 00:18:15.516 "small_cache_size": 128, 00:18:15.516 "large_cache_size": 16, 00:18:15.516 "task_count": 2048, 00:18:15.516 "sequence_count": 2048, 00:18:15.516 "buf_count": 2048 00:18:15.516 } 00:18:15.516 } 00:18:15.516 ] 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "subsystem": "bdev", 00:18:15.516 "config": [ 00:18:15.516 { 00:18:15.516 "method": "bdev_set_options", 00:18:15.516 "params": { 
00:18:15.516 "bdev_io_pool_size": 65535, 00:18:15.516 "bdev_io_cache_size": 256, 00:18:15.516 "bdev_auto_examine": true, 00:18:15.516 "iobuf_small_cache_size": 128, 00:18:15.516 "iobuf_large_cache_size": 16 00:18:15.516 } 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "method": "bdev_raid_set_options", 00:18:15.516 "params": { 00:18:15.516 "process_window_size_kb": 1024, 00:18:15.516 "process_max_bandwidth_mb_sec": 0 00:18:15.516 } 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "method": "bdev_iscsi_set_options", 00:18:15.516 "params": { 00:18:15.516 "timeout_sec": 30 00:18:15.516 } 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "method": "bdev_nvme_set_options", 00:18:15.516 "params": { 00:18:15.516 "action_on_timeout": "none", 00:18:15.516 "timeout_us": 0, 00:18:15.516 "timeout_admin_us": 0, 00:18:15.516 "keep_alive_timeout_ms": 10000, 00:18:15.516 "arbitration_burst": 0, 00:18:15.516 "low_priority_weight": 0, 00:18:15.516 "medium_priority_weight": 0, 00:18:15.516 "high_priority_weight": 0, 00:18:15.516 "nvme_adminq_poll_period_us": 10000, 00:18:15.516 "nvme_ioq_poll_period_us": 0, 00:18:15.516 "io_queue_requests": 0, 00:18:15.516 "delay_cmd_submit": true, 00:18:15.516 "transport_retry_count": 4, 00:18:15.516 "bdev_retry_count": 3, 00:18:15.516 "transport_ack_timeout": 0, 00:18:15.516 "ctrlr_loss_timeout_sec": 0, 00:18:15.516 "reconnect_delay_sec": 0, 00:18:15.516 "fast_io_fail_timeout_sec": 0, 00:18:15.516 "disable_auto_failback": false, 00:18:15.516 "generate_uuids": false, 00:18:15.516 "transport_tos": 0, 00:18:15.516 "nvme_error_stat": false, 00:18:15.516 "rdma_srq_size": 0, 00:18:15.516 "io_path_stat": false, 00:18:15.516 "allow_accel_sequence": false, 00:18:15.516 "rdma_max_cq_size": 0, 00:18:15.516 "rdma_cm_event_timeout_ms": 0, 00:18:15.516 "dhchap_digests": [ 00:18:15.516 "sha256", 00:18:15.516 "sha384", 00:18:15.516 "sha512" 00:18:15.516 ], 00:18:15.516 "dhchap_dhgroups": [ 00:18:15.516 "null", 00:18:15.516 "ffdhe2048", 00:18:15.516 "ffdhe3072", 00:18:15.516 
"ffdhe4096", 00:18:15.516 "ffdhe6144", 00:18:15.516 "ffdhe8192" 00:18:15.516 ] 00:18:15.516 } 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "method": "bdev_nvme_set_hotplug", 00:18:15.516 "params": { 00:18:15.516 "period_us": 100000, 00:18:15.516 "enable": false 00:18:15.516 } 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "method": "bdev_malloc_create", 00:18:15.516 "params": { 00:18:15.516 "name": "malloc0", 00:18:15.516 "num_blocks": 8192, 00:18:15.516 "block_size": 4096, 00:18:15.516 "physical_block_size": 4096, 00:18:15.516 "uuid": "b8841044-85ec-4728-9dc5-8479d3200f8c", 00:18:15.516 "optimal_io_boundary": 0, 00:18:15.516 "md_size": 0, 00:18:15.516 "dif_type": 0, 00:18:15.516 "dif_is_head_of_md": false, 00:18:15.516 "dif_pi_format": 0 00:18:15.516 } 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "method": "bdev_wait_for_examine" 00:18:15.516 } 00:18:15.516 ] 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "subsystem": "nbd", 00:18:15.516 "config": [] 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "subsystem": "scheduler", 00:18:15.516 "config": [ 00:18:15.516 { 00:18:15.516 "method": "framework_set_scheduler", 00:18:15.516 "params": { 00:18:15.516 "name": "static" 00:18:15.516 } 00:18:15.516 } 00:18:15.516 ] 00:18:15.516 }, 00:18:15.516 { 00:18:15.516 "subsystem": "nvmf", 00:18:15.516 "config": [ 00:18:15.516 { 00:18:15.517 "method": "nvmf_set_config", 00:18:15.517 "params": { 00:18:15.517 "discovery_filter": "match_any", 00:18:15.517 "admin_cmd_passthru": { 00:18:15.517 "identify_ctrlr": false 00:18:15.517 }, 00:18:15.517 "dhchap_digests": [ 00:18:15.517 "sha256", 00:18:15.517 "sha384", 00:18:15.517 "sha512" 00:18:15.517 ], 00:18:15.517 "dhchap_dhgroups": [ 00:18:15.517 "null", 00:18:15.517 "ffdhe2048", 00:18:15.517 "ffdhe3072", 00:18:15.517 "ffdhe4096", 00:18:15.517 "ffdhe6144", 00:18:15.517 "ffdhe8192" 00:18:15.517 ] 00:18:15.517 } 00:18:15.517 }, 00:18:15.517 { 00:18:15.517 "method": "nvmf_set_max_subsystems", 00:18:15.517 "params": { 00:18:15.517 "max_subsystems": 1024 
00:18:15.517 } 00:18:15.517 }, 00:18:15.517 { 00:18:15.517 "method": "nvmf_set_crdt", 00:18:15.517 "params": { 00:18:15.517 "crdt1": 0, 00:18:15.517 "crdt2": 0, 00:18:15.517 "crdt3": 0 00:18:15.517 } 00:18:15.517 }, 00:18:15.517 { 00:18:15.517 "method": "nvmf_create_transport", 00:18:15.517 "params": { 00:18:15.517 "trtype": "TCP", 00:18:15.517 "max_queue_depth": 128, 00:18:15.517 "max_io_qpairs_per_ctrlr": 127, 00:18:15.517 "in_capsule_data_size": 4096, 00:18:15.517 "max_io_size": 131072, 00:18:15.517 "io_unit_size": 131072, 00:18:15.517 "max_aq_depth": 128, 00:18:15.517 "num_shared_buffers": 511, 00:18:15.517 "buf_cache_size": 4294967295, 00:18:15.517 "dif_insert_or_strip": false, 00:18:15.517 "zcopy": false, 00:18:15.517 "c2h_success": false, 00:18:15.517 "sock_priority": 0, 00:18:15.517 "abort_timeout_sec": 1, 00:18:15.517 "ack_timeout": 0, 00:18:15.517 "data_wr_pool_size": 0 00:18:15.517 } 00:18:15.517 }, 00:18:15.517 { 00:18:15.517 "method": "nvmf_create_subsystem", 00:18:15.517 "params": { 00:18:15.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.517 "allow_any_host": false, 00:18:15.517 "serial_number": "SPDK00000000000001", 00:18:15.517 "model_number": "SPDK bdev Controller", 00:18:15.517 "max_namespaces": 10, 00:18:15.517 "min_cntlid": 1, 00:18:15.517 "max_cntlid": 65519, 00:18:15.517 "ana_reporting": false 00:18:15.517 } 00:18:15.517 }, 00:18:15.517 { 00:18:15.517 "method": "nvmf_subsystem_add_host", 00:18:15.517 "params": { 00:18:15.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.517 "host": "nqn.2016-06.io.spdk:host1", 00:18:15.517 "psk": "key0" 00:18:15.517 } 00:18:15.517 }, 00:18:15.517 { 00:18:15.517 "method": "nvmf_subsystem_add_ns", 00:18:15.517 "params": { 00:18:15.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.517 "namespace": { 00:18:15.517 "nsid": 1, 00:18:15.517 "bdev_name": "malloc0", 00:18:15.517 "nguid": "B884104485EC47289DC58479D3200F8C", 00:18:15.517 "uuid": "b8841044-85ec-4728-9dc5-8479d3200f8c", 00:18:15.517 "no_auto_visible": 
false 00:18:15.517 } 00:18:15.517 } 00:18:15.517 }, 00:18:15.517 { 00:18:15.517 "method": "nvmf_subsystem_add_listener", 00:18:15.517 "params": { 00:18:15.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.517 "listen_address": { 00:18:15.517 "trtype": "TCP", 00:18:15.517 "adrfam": "IPv4", 00:18:15.517 "traddr": "10.0.0.2", 00:18:15.517 "trsvcid": "4420" 00:18:15.517 }, 00:18:15.517 "secure_channel": true 00:18:15.517 } 00:18:15.517 } 00:18:15.517 ] 00:18:15.517 } 00:18:15.517 ] 00:18:15.517 }' 00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2634634 00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2634634 00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2634634 ']' 00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.517 03:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.775 [2024-12-06 03:26:35.664940] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:18:15.775 [2024-12-06 03:26:35.664989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.775 [2024-12-06 03:26:35.730749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.775 [2024-12-06 03:26:35.771712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.775 [2024-12-06 03:26:35.771748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.775 [2024-12-06 03:26:35.771755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.775 [2024-12-06 03:26:35.771761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.775 [2024-12-06 03:26:35.771766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.775 [2024-12-06 03:26:35.772356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.035 [2024-12-06 03:26:35.987075] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.035 [2024-12-06 03:26:36.019091] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.035 [2024-12-06 03:26:36.019311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2634882 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2634882 /var/tmp/bdevperf.sock 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2634882 ']' 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.605 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:16.605 "subsystems": [ 00:18:16.605 { 00:18:16.605 "subsystem": "keyring", 00:18:16.605 "config": [ 00:18:16.605 { 00:18:16.605 "method": "keyring_file_add_key", 00:18:16.605 "params": { 00:18:16.605 "name": "key0", 00:18:16.605 "path": "/tmp/tmp.yx0YCZglnE" 00:18:16.605 } 00:18:16.605 } 00:18:16.605 ] 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "subsystem": "iobuf", 00:18:16.605 "config": [ 00:18:16.605 { 00:18:16.605 "method": "iobuf_set_options", 00:18:16.605 "params": { 00:18:16.605 "small_pool_count": 8192, 00:18:16.605 "large_pool_count": 1024, 00:18:16.605 "small_bufsize": 8192, 00:18:16.605 "large_bufsize": 135168, 00:18:16.605 "enable_numa": false 00:18:16.605 } 00:18:16.605 } 00:18:16.605 ] 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "subsystem": "sock", 00:18:16.605 "config": [ 00:18:16.605 { 00:18:16.605 "method": "sock_set_default_impl", 00:18:16.605 "params": { 00:18:16.605 "impl_name": "posix" 00:18:16.605 } 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "method": "sock_impl_set_options", 00:18:16.605 "params": { 00:18:16.605 "impl_name": "ssl", 00:18:16.605 "recv_buf_size": 4096, 00:18:16.605 "send_buf_size": 4096, 00:18:16.605 "enable_recv_pipe": true, 00:18:16.605 "enable_quickack": false, 00:18:16.605 "enable_placement_id": 0, 00:18:16.605 "enable_zerocopy_send_server": true, 00:18:16.605 "enable_zerocopy_send_client": false, 00:18:16.605 "zerocopy_threshold": 0, 00:18:16.605 "tls_version": 0, 00:18:16.605 "enable_ktls": false 00:18:16.605 } 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "method": "sock_impl_set_options", 00:18:16.605 "params": { 
00:18:16.605 "impl_name": "posix", 00:18:16.605 "recv_buf_size": 2097152, 00:18:16.605 "send_buf_size": 2097152, 00:18:16.605 "enable_recv_pipe": true, 00:18:16.605 "enable_quickack": false, 00:18:16.605 "enable_placement_id": 0, 00:18:16.605 "enable_zerocopy_send_server": true, 00:18:16.605 "enable_zerocopy_send_client": false, 00:18:16.605 "zerocopy_threshold": 0, 00:18:16.605 "tls_version": 0, 00:18:16.605 "enable_ktls": false 00:18:16.605 } 00:18:16.605 } 00:18:16.605 ] 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "subsystem": "vmd", 00:18:16.605 "config": [] 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "subsystem": "accel", 00:18:16.605 "config": [ 00:18:16.605 { 00:18:16.605 "method": "accel_set_options", 00:18:16.605 "params": { 00:18:16.605 "small_cache_size": 128, 00:18:16.605 "large_cache_size": 16, 00:18:16.605 "task_count": 2048, 00:18:16.605 "sequence_count": 2048, 00:18:16.605 "buf_count": 2048 00:18:16.605 } 00:18:16.605 } 00:18:16.605 ] 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "subsystem": "bdev", 00:18:16.605 "config": [ 00:18:16.605 { 00:18:16.605 "method": "bdev_set_options", 00:18:16.605 "params": { 00:18:16.605 "bdev_io_pool_size": 65535, 00:18:16.605 "bdev_io_cache_size": 256, 00:18:16.605 "bdev_auto_examine": true, 00:18:16.605 "iobuf_small_cache_size": 128, 00:18:16.605 "iobuf_large_cache_size": 16 00:18:16.605 } 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "method": "bdev_raid_set_options", 00:18:16.605 "params": { 00:18:16.605 "process_window_size_kb": 1024, 00:18:16.605 "process_max_bandwidth_mb_sec": 0 00:18:16.605 } 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "method": "bdev_iscsi_set_options", 00:18:16.605 "params": { 00:18:16.605 "timeout_sec": 30 00:18:16.605 } 00:18:16.605 }, 00:18:16.605 { 00:18:16.605 "method": "bdev_nvme_set_options", 00:18:16.605 "params": { 00:18:16.605 "action_on_timeout": "none", 00:18:16.605 "timeout_us": 0, 00:18:16.605 "timeout_admin_us": 0, 00:18:16.605 "keep_alive_timeout_ms": 10000, 00:18:16.605 
"arbitration_burst": 0, 00:18:16.605 "low_priority_weight": 0, 00:18:16.605 "medium_priority_weight": 0, 00:18:16.605 "high_priority_weight": 0, 00:18:16.605 "nvme_adminq_poll_period_us": 10000, 00:18:16.605 "nvme_ioq_poll_period_us": 0, 00:18:16.605 "io_queue_requests": 512, 00:18:16.605 "delay_cmd_submit": true, 00:18:16.605 "transport_retry_count": 4, 00:18:16.605 "bdev_retry_count": 3, 00:18:16.605 "transport_ack_timeout": 0, 00:18:16.605 "ctrlr_loss_timeout_sec": 0, 00:18:16.605 "reconnect_delay_sec": 0, 00:18:16.605 "fast_io_fail_timeout_sec": 0, 00:18:16.606 "disable_auto_failback": false, 00:18:16.606 "generate_uuids": false, 00:18:16.606 "transport_tos": 0, 00:18:16.606 "nvme_error_stat": false, 00:18:16.606 "rdma_srq_size": 0, 00:18:16.606 "io_path_stat": false, 00:18:16.606 "allow_accel_sequence": false, 00:18:16.606 "rdma_max_cq_size": 0, 00:18:16.606 "rdma_cm_event_timeout_ms": 0, 00:18:16.606 "dhchap_digests": [ 00:18:16.606 "sha256", 00:18:16.606 "sha384", 00:18:16.606 "sha512" 00:18:16.606 ], 00:18:16.606 "dhchap_dhgroups": [ 00:18:16.606 "null", 00:18:16.606 "ffdhe2048", 00:18:16.606 "ffdhe3072", 00:18:16.606 "ffdhe4096", 00:18:16.606 "ffdhe6144", 00:18:16.606 "ffdhe8192" 00:18:16.606 ] 00:18:16.606 } 00:18:16.606 }, 00:18:16.606 { 00:18:16.606 "method": "bdev_nvme_attach_controller", 00:18:16.606 "params": { 00:18:16.606 "name": "TLSTEST", 00:18:16.606 "trtype": "TCP", 00:18:16.606 "adrfam": "IPv4", 00:18:16.606 "traddr": "10.0.0.2", 00:18:16.606 "trsvcid": "4420", 00:18:16.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.606 "prchk_reftag": false, 00:18:16.606 "prchk_guard": false, 00:18:16.606 "ctrlr_loss_timeout_sec": 0, 00:18:16.606 "reconnect_delay_sec": 0, 00:18:16.606 "fast_io_fail_timeout_sec": 0, 00:18:16.606 "psk": "key0", 00:18:16.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.606 "hdgst": false, 00:18:16.606 "ddgst": false, 00:18:16.606 "multipath": "multipath" 00:18:16.606 } 00:18:16.606 }, 00:18:16.606 { 00:18:16.606 
"method": "bdev_nvme_set_hotplug", 00:18:16.606 "params": { 00:18:16.606 "period_us": 100000, 00:18:16.606 "enable": false 00:18:16.606 } 00:18:16.606 }, 00:18:16.606 { 00:18:16.606 "method": "bdev_wait_for_examine" 00:18:16.606 } 00:18:16.606 ] 00:18:16.606 }, 00:18:16.606 { 00:18:16.606 "subsystem": "nbd", 00:18:16.606 "config": [] 00:18:16.606 } 00:18:16.606 ] 00:18:16.606 }' 00:18:16.606 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.606 03:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.606 [2024-12-06 03:26:36.579417] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:18:16.606 [2024-12-06 03:26:36.579465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634882 ] 00:18:16.606 [2024-12-06 03:26:36.637694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.606 [2024-12-06 03:26:36.680228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.865 [2024-12-06 03:26:36.833118] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.432 03:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.432 03:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.432 03:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:17.432 Running I/O for 10 seconds... 
00:18:19.749 5498.00 IOPS, 21.48 MiB/s [2024-12-06T02:26:40.825Z] 5360.00 IOPS, 20.94 MiB/s [2024-12-06T02:26:41.759Z] 5429.33 IOPS, 21.21 MiB/s [2024-12-06T02:26:42.693Z] 5470.00 IOPS, 21.37 MiB/s [2024-12-06T02:26:43.628Z] 5493.80 IOPS, 21.46 MiB/s [2024-12-06T02:26:44.564Z] 5517.50 IOPS, 21.55 MiB/s [2024-12-06T02:26:45.939Z] 5501.29 IOPS, 21.49 MiB/s [2024-12-06T02:26:46.897Z] 5496.00 IOPS, 21.47 MiB/s [2024-12-06T02:26:47.834Z] 5505.56 IOPS, 21.51 MiB/s [2024-12-06T02:26:47.834Z] 5496.80 IOPS, 21.47 MiB/s 00:18:27.693 Latency(us) 00:18:27.693 [2024-12-06T02:26:47.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.693 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:27.693 Verification LBA range: start 0x0 length 0x2000 00:18:27.693 TLSTESTn1 : 10.01 5501.52 21.49 0.00 0.00 23230.36 6097.70 31685.23 00:18:27.693 [2024-12-06T02:26:47.834Z] =================================================================================================================== 00:18:27.693 [2024-12-06T02:26:47.834Z] Total : 5501.52 21.49 0.00 0.00 23230.36 6097.70 31685.23 00:18:27.693 { 00:18:27.693 "results": [ 00:18:27.693 { 00:18:27.693 "job": "TLSTESTn1", 00:18:27.693 "core_mask": "0x4", 00:18:27.693 "workload": "verify", 00:18:27.693 "status": "finished", 00:18:27.693 "verify_range": { 00:18:27.693 "start": 0, 00:18:27.694 "length": 8192 00:18:27.694 }, 00:18:27.694 "queue_depth": 128, 00:18:27.694 "io_size": 4096, 00:18:27.694 "runtime": 10.014326, 00:18:27.694 "iops": 5501.518524561713, 00:18:27.694 "mibps": 21.49030673656919, 00:18:27.694 "io_failed": 0, 00:18:27.694 "io_timeout": 0, 00:18:27.694 "avg_latency_us": 23230.358484108583, 00:18:27.694 "min_latency_us": 6097.697391304348, 00:18:27.694 "max_latency_us": 31685.231304347824 00:18:27.694 } 00:18:27.694 ], 00:18:27.694 "core_count": 1 00:18:27.694 } 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2634882 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2634882 ']' 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2634882 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2634882 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2634882' 00:18:27.694 killing process with pid 2634882 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2634882 00:18:27.694 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.694 00:18:27.694 Latency(us) 00:18:27.694 [2024-12-06T02:26:47.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.694 [2024-12-06T02:26:47.835Z] =================================================================================================================== 00:18:27.694 [2024-12-06T02:26:47.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2634882 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2634634 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2634634 ']' 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2634634 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.694 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2634634 00:18:27.953 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:27.953 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:27.953 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2634634' 00:18:27.953 killing process with pid 2634634 00:18:27.953 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2634634 00:18:27.953 03:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2634634 00:18:27.953 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2636729 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2636729 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:27.954 
03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2636729 ']' 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.954 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.954 [2024-12-06 03:26:48.072583] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:18:27.954 [2024-12-06 03:26:48.072628] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.213 [2024-12-06 03:26:48.137864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.213 [2024-12-06 03:26:48.178742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.213 [2024-12-06 03:26:48.178777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.213 [2024-12-06 03:26:48.178785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.213 [2024-12-06 03:26:48.178791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:28.213 [2024-12-06 03:26:48.178796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.213 [2024-12-06 03:26:48.179395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.213 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.213 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:28.213 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.213 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.213 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.213 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.213 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.yx0YCZglnE 00:18:28.213 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yx0YCZglnE 00:18:28.213 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:28.473 [2024-12-06 03:26:48.484023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.473 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:28.732 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:28.732 [2024-12-06 03:26:48.848972] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:28.732 [2024-12-06 03:26:48.849176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.732 03:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:28.991 malloc0 00:18:28.991 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:29.250 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yx0YCZglnE 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2636984 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2636984 /var/tmp/bdevperf.sock 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2636984 ']' 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.509 
03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.509 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.509 [2024-12-06 03:26:49.646441] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:18:29.509 [2024-12-06 03:26:49.646487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636984 ] 00:18:29.769 [2024-12-06 03:26:49.708684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.769 [2024-12-06 03:26:49.752317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.769 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.769 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.769 03:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yx0YCZglnE 00:18:30.028 03:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:30.287 [2024-12-06 03:26:50.213574] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:30.287 nvme0n1 00:18:30.287 03:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.287 Running I/O for 1 seconds... 00:18:31.664 5275.00 IOPS, 20.61 MiB/s 00:18:31.664 Latency(us) 00:18:31.664 [2024-12-06T02:26:51.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.664 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:31.664 Verification LBA range: start 0x0 length 0x2000 00:18:31.664 nvme0n1 : 1.02 5319.87 20.78 0.00 0.00 23878.40 6753.06 25644.52 00:18:31.664 [2024-12-06T02:26:51.805Z] =================================================================================================================== 00:18:31.664 [2024-12-06T02:26:51.805Z] Total : 5319.87 20.78 0.00 0.00 23878.40 6753.06 25644.52 00:18:31.664 { 00:18:31.664 "results": [ 00:18:31.664 { 00:18:31.664 "job": "nvme0n1", 00:18:31.664 "core_mask": "0x2", 00:18:31.664 "workload": "verify", 00:18:31.664 "status": "finished", 00:18:31.664 "verify_range": { 00:18:31.664 "start": 0, 00:18:31.664 "length": 8192 00:18:31.664 }, 00:18:31.664 "queue_depth": 128, 00:18:31.664 "io_size": 4096, 00:18:31.664 "runtime": 1.015627, 00:18:31.664 "iops": 5319.866447032227, 00:18:31.664 "mibps": 20.780728308719638, 00:18:31.664 "io_failed": 0, 00:18:31.664 "io_timeout": 0, 00:18:31.664 "avg_latency_us": 23878.402770763423, 00:18:31.664 "min_latency_us": 6753.057391304348, 00:18:31.664 "max_latency_us": 25644.521739130436 00:18:31.664 } 00:18:31.664 ], 00:18:31.664 "core_count": 1 00:18:31.664 } 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2636984 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2636984 ']' 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2636984 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2636984 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2636984' 00:18:31.664 killing process with pid 2636984 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2636984 00:18:31.664 Received shutdown signal, test time was about 1.000000 seconds 00:18:31.664 00:18:31.664 Latency(us) 00:18:31.664 [2024-12-06T02:26:51.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.664 [2024-12-06T02:26:51.805Z] =================================================================================================================== 00:18:31.664 [2024-12-06T02:26:51.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2636984 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2636729 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2636729 ']' 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2636729 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2636729 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.664 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.665 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2636729' 00:18:31.665 killing process with pid 2636729 00:18:31.665 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2636729 00:18:31.665 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2636729 00:18:31.924 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:31.924 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.924 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.924 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.924 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2637361 00:18:31.924 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2637361 00:18:31.925 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:31.925 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2637361 ']' 00:18:31.925 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.925 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:31.925 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.925 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.925 03:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.925 [2024-12-06 03:26:51.914105] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:18:31.925 [2024-12-06 03:26:51.914154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.925 [2024-12-06 03:26:51.979820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.925 [2024-12-06 03:26:52.021064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.925 [2024-12-06 03:26:52.021100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.925 [2024-12-06 03:26:52.021107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.925 [2024-12-06 03:26:52.021113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.925 [2024-12-06 03:26:52.021119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.925 [2024-12-06 03:26:52.021682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.185 [2024-12-06 03:26:52.158115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.185 malloc0 00:18:32.185 [2024-12-06 03:26:52.186275] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.185 [2024-12-06 03:26:52.186491] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2637473 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2637473 /var/tmp/bdevperf.sock 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2637473 ']' 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.185 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.185 [2024-12-06 03:26:52.262887] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:18:32.185 [2024-12-06 03:26:52.262926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637473 ] 00:18:32.445 [2024-12-06 03:26:52.324102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.445 [2024-12-06 03:26:52.364932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.445 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.445 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.445 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yx0YCZglnE 00:18:32.705 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:32.705 [2024-12-06 03:26:52.838438] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.964 nvme0n1 00:18:32.964 03:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.964 Running I/O for 1 seconds... 
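The bdevperf `results` blocks in this log report derived throughput and latency alongside the raw counters, and the fields are mutually consistent: `mibps` is just `iops * io_size / 2^20`, and by Little's law `iops * avg_latency` recovers the effective concurrency, which should sit just under the configured queue depth. A quick sanity check, with the numbers copied from the 10-second TLSTESTn1 run reported earlier in this log:

```python
# Cross-check the derived fields in a bdevperf "results" entry against the
# raw counters. Values are copied from the 10-second TLSTESTn1 run above.
iops = 5501.518524561713              # "iops"
io_size = 4096                        # "io_size" (bytes per I/O)
avg_latency_us = 23230.358484108583   # "avg_latency_us"
queue_depth = 128                     # "queue_depth"

# Throughput: IOPS * bytes-per-IO, expressed in MiB/s.
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # matches the reported "mibps" of ~21.49

# Little's law: mean concurrency = completion rate * mean latency.
# This lands just under the configured queue depth of 128, as expected
# when the queue is kept nearly full for the whole run.
concurrency = iops * avg_latency_us / 1e6
print(round(concurrency, 1))
```

The same arithmetic applies to the 1-second `nvme0n1` runs later in the log; only the copied-in numbers differ.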
00:18:33.901 5511.00 IOPS, 21.53 MiB/s 00:18:33.902 Latency(us) 00:18:33.902 [2024-12-06T02:26:54.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.902 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:33.902 Verification LBA range: start 0x0 length 0x2000 00:18:33.902 nvme0n1 : 1.02 5538.98 21.64 0.00 0.00 22915.50 6525.11 24390.79 00:18:33.902 [2024-12-06T02:26:54.043Z] =================================================================================================================== 00:18:33.902 [2024-12-06T02:26:54.043Z] Total : 5538.98 21.64 0.00 0.00 22915.50 6525.11 24390.79 00:18:33.902 { 00:18:33.902 "results": [ 00:18:33.902 { 00:18:33.902 "job": "nvme0n1", 00:18:33.902 "core_mask": "0x2", 00:18:33.902 "workload": "verify", 00:18:33.902 "status": "finished", 00:18:33.902 "verify_range": { 00:18:33.902 "start": 0, 00:18:33.902 "length": 8192 00:18:33.902 }, 00:18:33.902 "queue_depth": 128, 00:18:33.902 "io_size": 4096, 00:18:33.902 "runtime": 1.018057, 00:18:33.902 "iops": 5538.982591348029, 00:18:33.902 "mibps": 21.63665074745324, 00:18:33.902 "io_failed": 0, 00:18:33.902 "io_timeout": 0, 00:18:33.902 "avg_latency_us": 22915.495615781398, 00:18:33.902 "min_latency_us": 6525.106086956522, 00:18:33.902 "max_latency_us": 24390.78956521739 00:18:33.902 } 00:18:33.902 ], 00:18:33.902 "core_count": 1 00:18:33.902 } 00:18:34.161 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:34.161 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.161 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.161 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.161 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:34.161 "subsystems": [ 00:18:34.161 { 00:18:34.161 "subsystem": 
"keyring", 00:18:34.161 "config": [ 00:18:34.161 { 00:18:34.161 "method": "keyring_file_add_key", 00:18:34.161 "params": { 00:18:34.161 "name": "key0", 00:18:34.161 "path": "/tmp/tmp.yx0YCZglnE" 00:18:34.161 } 00:18:34.161 } 00:18:34.161 ] 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "subsystem": "iobuf", 00:18:34.161 "config": [ 00:18:34.161 { 00:18:34.161 "method": "iobuf_set_options", 00:18:34.161 "params": { 00:18:34.161 "small_pool_count": 8192, 00:18:34.161 "large_pool_count": 1024, 00:18:34.161 "small_bufsize": 8192, 00:18:34.161 "large_bufsize": 135168, 00:18:34.161 "enable_numa": false 00:18:34.161 } 00:18:34.161 } 00:18:34.161 ] 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "subsystem": "sock", 00:18:34.161 "config": [ 00:18:34.161 { 00:18:34.161 "method": "sock_set_default_impl", 00:18:34.161 "params": { 00:18:34.161 "impl_name": "posix" 00:18:34.161 } 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "method": "sock_impl_set_options", 00:18:34.161 "params": { 00:18:34.161 "impl_name": "ssl", 00:18:34.161 "recv_buf_size": 4096, 00:18:34.161 "send_buf_size": 4096, 00:18:34.161 "enable_recv_pipe": true, 00:18:34.161 "enable_quickack": false, 00:18:34.161 "enable_placement_id": 0, 00:18:34.161 "enable_zerocopy_send_server": true, 00:18:34.161 "enable_zerocopy_send_client": false, 00:18:34.161 "zerocopy_threshold": 0, 00:18:34.161 "tls_version": 0, 00:18:34.161 "enable_ktls": false 00:18:34.161 } 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "method": "sock_impl_set_options", 00:18:34.161 "params": { 00:18:34.161 "impl_name": "posix", 00:18:34.161 "recv_buf_size": 2097152, 00:18:34.161 "send_buf_size": 2097152, 00:18:34.161 "enable_recv_pipe": true, 00:18:34.161 "enable_quickack": false, 00:18:34.161 "enable_placement_id": 0, 00:18:34.161 "enable_zerocopy_send_server": true, 00:18:34.161 "enable_zerocopy_send_client": false, 00:18:34.161 "zerocopy_threshold": 0, 00:18:34.161 "tls_version": 0, 00:18:34.161 "enable_ktls": false 00:18:34.161 } 00:18:34.161 } 00:18:34.161 
] 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "subsystem": "vmd", 00:18:34.161 "config": [] 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "subsystem": "accel", 00:18:34.161 "config": [ 00:18:34.161 { 00:18:34.161 "method": "accel_set_options", 00:18:34.161 "params": { 00:18:34.161 "small_cache_size": 128, 00:18:34.161 "large_cache_size": 16, 00:18:34.161 "task_count": 2048, 00:18:34.161 "sequence_count": 2048, 00:18:34.161 "buf_count": 2048 00:18:34.161 } 00:18:34.161 } 00:18:34.161 ] 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "subsystem": "bdev", 00:18:34.161 "config": [ 00:18:34.161 { 00:18:34.161 "method": "bdev_set_options", 00:18:34.161 "params": { 00:18:34.161 "bdev_io_pool_size": 65535, 00:18:34.161 "bdev_io_cache_size": 256, 00:18:34.161 "bdev_auto_examine": true, 00:18:34.161 "iobuf_small_cache_size": 128, 00:18:34.161 "iobuf_large_cache_size": 16 00:18:34.161 } 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "method": "bdev_raid_set_options", 00:18:34.161 "params": { 00:18:34.161 "process_window_size_kb": 1024, 00:18:34.161 "process_max_bandwidth_mb_sec": 0 00:18:34.161 } 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "method": "bdev_iscsi_set_options", 00:18:34.161 "params": { 00:18:34.161 "timeout_sec": 30 00:18:34.161 } 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "method": "bdev_nvme_set_options", 00:18:34.161 "params": { 00:18:34.161 "action_on_timeout": "none", 00:18:34.161 "timeout_us": 0, 00:18:34.161 "timeout_admin_us": 0, 00:18:34.161 "keep_alive_timeout_ms": 10000, 00:18:34.161 "arbitration_burst": 0, 00:18:34.161 "low_priority_weight": 0, 00:18:34.161 "medium_priority_weight": 0, 00:18:34.161 "high_priority_weight": 0, 00:18:34.161 "nvme_adminq_poll_period_us": 10000, 00:18:34.161 "nvme_ioq_poll_period_us": 0, 00:18:34.161 "io_queue_requests": 0, 00:18:34.161 "delay_cmd_submit": true, 00:18:34.161 "transport_retry_count": 4, 00:18:34.161 "bdev_retry_count": 3, 00:18:34.161 "transport_ack_timeout": 0, 00:18:34.161 "ctrlr_loss_timeout_sec": 0, 
00:18:34.161 "reconnect_delay_sec": 0, 00:18:34.161 "fast_io_fail_timeout_sec": 0, 00:18:34.161 "disable_auto_failback": false, 00:18:34.161 "generate_uuids": false, 00:18:34.161 "transport_tos": 0, 00:18:34.161 "nvme_error_stat": false, 00:18:34.161 "rdma_srq_size": 0, 00:18:34.161 "io_path_stat": false, 00:18:34.161 "allow_accel_sequence": false, 00:18:34.161 "rdma_max_cq_size": 0, 00:18:34.161 "rdma_cm_event_timeout_ms": 0, 00:18:34.161 "dhchap_digests": [ 00:18:34.161 "sha256", 00:18:34.161 "sha384", 00:18:34.161 "sha512" 00:18:34.161 ], 00:18:34.161 "dhchap_dhgroups": [ 00:18:34.161 "null", 00:18:34.161 "ffdhe2048", 00:18:34.161 "ffdhe3072", 00:18:34.161 "ffdhe4096", 00:18:34.161 "ffdhe6144", 00:18:34.161 "ffdhe8192" 00:18:34.161 ] 00:18:34.161 } 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "method": "bdev_nvme_set_hotplug", 00:18:34.161 "params": { 00:18:34.161 "period_us": 100000, 00:18:34.161 "enable": false 00:18:34.161 } 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "method": "bdev_malloc_create", 00:18:34.161 "params": { 00:18:34.161 "name": "malloc0", 00:18:34.161 "num_blocks": 8192, 00:18:34.161 "block_size": 4096, 00:18:34.161 "physical_block_size": 4096, 00:18:34.161 "uuid": "d2553f42-50a5-4652-820c-fddf293d0e72", 00:18:34.162 "optimal_io_boundary": 0, 00:18:34.162 "md_size": 0, 00:18:34.162 "dif_type": 0, 00:18:34.162 "dif_is_head_of_md": false, 00:18:34.162 "dif_pi_format": 0 00:18:34.162 } 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "method": "bdev_wait_for_examine" 00:18:34.162 } 00:18:34.162 ] 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "subsystem": "nbd", 00:18:34.162 "config": [] 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "subsystem": "scheduler", 00:18:34.162 "config": [ 00:18:34.162 { 00:18:34.162 "method": "framework_set_scheduler", 00:18:34.162 "params": { 00:18:34.162 "name": "static" 00:18:34.162 } 00:18:34.162 } 00:18:34.162 ] 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "subsystem": "nvmf", 00:18:34.162 "config": [ 00:18:34.162 { 
00:18:34.162 "method": "nvmf_set_config", 00:18:34.162 "params": { 00:18:34.162 "discovery_filter": "match_any", 00:18:34.162 "admin_cmd_passthru": { 00:18:34.162 "identify_ctrlr": false 00:18:34.162 }, 00:18:34.162 "dhchap_digests": [ 00:18:34.162 "sha256", 00:18:34.162 "sha384", 00:18:34.162 "sha512" 00:18:34.162 ], 00:18:34.162 "dhchap_dhgroups": [ 00:18:34.162 "null", 00:18:34.162 "ffdhe2048", 00:18:34.162 "ffdhe3072", 00:18:34.162 "ffdhe4096", 00:18:34.162 "ffdhe6144", 00:18:34.162 "ffdhe8192" 00:18:34.162 ] 00:18:34.162 } 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "method": "nvmf_set_max_subsystems", 00:18:34.162 "params": { 00:18:34.162 "max_subsystems": 1024 00:18:34.162 } 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "method": "nvmf_set_crdt", 00:18:34.162 "params": { 00:18:34.162 "crdt1": 0, 00:18:34.162 "crdt2": 0, 00:18:34.162 "crdt3": 0 00:18:34.162 } 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "method": "nvmf_create_transport", 00:18:34.162 "params": { 00:18:34.162 "trtype": "TCP", 00:18:34.162 "max_queue_depth": 128, 00:18:34.162 "max_io_qpairs_per_ctrlr": 127, 00:18:34.162 "in_capsule_data_size": 4096, 00:18:34.162 "max_io_size": 131072, 00:18:34.162 "io_unit_size": 131072, 00:18:34.162 "max_aq_depth": 128, 00:18:34.162 "num_shared_buffers": 511, 00:18:34.162 "buf_cache_size": 4294967295, 00:18:34.162 "dif_insert_or_strip": false, 00:18:34.162 "zcopy": false, 00:18:34.162 "c2h_success": false, 00:18:34.162 "sock_priority": 0, 00:18:34.162 "abort_timeout_sec": 1, 00:18:34.162 "ack_timeout": 0, 00:18:34.162 "data_wr_pool_size": 0 00:18:34.162 } 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "method": "nvmf_create_subsystem", 00:18:34.162 "params": { 00:18:34.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.162 "allow_any_host": false, 00:18:34.162 "serial_number": "00000000000000000000", 00:18:34.162 "model_number": "SPDK bdev Controller", 00:18:34.162 "max_namespaces": 32, 00:18:34.162 "min_cntlid": 1, 00:18:34.162 "max_cntlid": 65519, 00:18:34.162 
"ana_reporting": false 00:18:34.162 } 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "method": "nvmf_subsystem_add_host", 00:18:34.162 "params": { 00:18:34.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.162 "host": "nqn.2016-06.io.spdk:host1", 00:18:34.162 "psk": "key0" 00:18:34.162 } 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "method": "nvmf_subsystem_add_ns", 00:18:34.162 "params": { 00:18:34.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.162 "namespace": { 00:18:34.162 "nsid": 1, 00:18:34.162 "bdev_name": "malloc0", 00:18:34.162 "nguid": "D2553F4250A54652820CFDDF293D0E72", 00:18:34.162 "uuid": "d2553f42-50a5-4652-820c-fddf293d0e72", 00:18:34.162 "no_auto_visible": false 00:18:34.162 } 00:18:34.162 } 00:18:34.162 }, 00:18:34.162 { 00:18:34.162 "method": "nvmf_subsystem_add_listener", 00:18:34.162 "params": { 00:18:34.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.162 "listen_address": { 00:18:34.162 "trtype": "TCP", 00:18:34.162 "adrfam": "IPv4", 00:18:34.162 "traddr": "10.0.0.2", 00:18:34.162 "trsvcid": "4420" 00:18:34.162 }, 00:18:34.162 "secure_channel": false, 00:18:34.162 "sock_impl": "ssl" 00:18:34.162 } 00:18:34.162 } 00:18:34.162 ] 00:18:34.162 } 00:18:34.162 ] 00:18:34.162 }' 00:18:34.162 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:34.422 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:34.422 "subsystems": [ 00:18:34.422 { 00:18:34.422 "subsystem": "keyring", 00:18:34.422 "config": [ 00:18:34.422 { 00:18:34.422 "method": "keyring_file_add_key", 00:18:34.422 "params": { 00:18:34.422 "name": "key0", 00:18:34.423 "path": "/tmp/tmp.yx0YCZglnE" 00:18:34.423 } 00:18:34.423 } 00:18:34.423 ] 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "subsystem": "iobuf", 00:18:34.423 "config": [ 00:18:34.423 { 00:18:34.423 "method": "iobuf_set_options", 00:18:34.423 "params": { 00:18:34.423 
"small_pool_count": 8192, 00:18:34.423 "large_pool_count": 1024, 00:18:34.423 "small_bufsize": 8192, 00:18:34.423 "large_bufsize": 135168, 00:18:34.423 "enable_numa": false 00:18:34.423 } 00:18:34.423 } 00:18:34.423 ] 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "subsystem": "sock", 00:18:34.423 "config": [ 00:18:34.423 { 00:18:34.423 "method": "sock_set_default_impl", 00:18:34.423 "params": { 00:18:34.423 "impl_name": "posix" 00:18:34.423 } 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "method": "sock_impl_set_options", 00:18:34.423 "params": { 00:18:34.423 "impl_name": "ssl", 00:18:34.423 "recv_buf_size": 4096, 00:18:34.423 "send_buf_size": 4096, 00:18:34.423 "enable_recv_pipe": true, 00:18:34.423 "enable_quickack": false, 00:18:34.423 "enable_placement_id": 0, 00:18:34.423 "enable_zerocopy_send_server": true, 00:18:34.423 "enable_zerocopy_send_client": false, 00:18:34.423 "zerocopy_threshold": 0, 00:18:34.423 "tls_version": 0, 00:18:34.423 "enable_ktls": false 00:18:34.423 } 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "method": "sock_impl_set_options", 00:18:34.423 "params": { 00:18:34.423 "impl_name": "posix", 00:18:34.423 "recv_buf_size": 2097152, 00:18:34.423 "send_buf_size": 2097152, 00:18:34.423 "enable_recv_pipe": true, 00:18:34.423 "enable_quickack": false, 00:18:34.423 "enable_placement_id": 0, 00:18:34.423 "enable_zerocopy_send_server": true, 00:18:34.423 "enable_zerocopy_send_client": false, 00:18:34.423 "zerocopy_threshold": 0, 00:18:34.423 "tls_version": 0, 00:18:34.423 "enable_ktls": false 00:18:34.423 } 00:18:34.423 } 00:18:34.423 ] 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "subsystem": "vmd", 00:18:34.423 "config": [] 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "subsystem": "accel", 00:18:34.423 "config": [ 00:18:34.423 { 00:18:34.423 "method": "accel_set_options", 00:18:34.423 "params": { 00:18:34.423 "small_cache_size": 128, 00:18:34.423 "large_cache_size": 16, 00:18:34.423 "task_count": 2048, 00:18:34.423 "sequence_count": 2048, 00:18:34.423 
"buf_count": 2048 00:18:34.423 } 00:18:34.423 } 00:18:34.423 ] 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "subsystem": "bdev", 00:18:34.423 "config": [ 00:18:34.423 { 00:18:34.423 "method": "bdev_set_options", 00:18:34.423 "params": { 00:18:34.423 "bdev_io_pool_size": 65535, 00:18:34.423 "bdev_io_cache_size": 256, 00:18:34.423 "bdev_auto_examine": true, 00:18:34.423 "iobuf_small_cache_size": 128, 00:18:34.423 "iobuf_large_cache_size": 16 00:18:34.423 } 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "method": "bdev_raid_set_options", 00:18:34.423 "params": { 00:18:34.423 "process_window_size_kb": 1024, 00:18:34.423 "process_max_bandwidth_mb_sec": 0 00:18:34.423 } 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "method": "bdev_iscsi_set_options", 00:18:34.423 "params": { 00:18:34.423 "timeout_sec": 30 00:18:34.423 } 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "method": "bdev_nvme_set_options", 00:18:34.423 "params": { 00:18:34.423 "action_on_timeout": "none", 00:18:34.423 "timeout_us": 0, 00:18:34.423 "timeout_admin_us": 0, 00:18:34.423 "keep_alive_timeout_ms": 10000, 00:18:34.423 "arbitration_burst": 0, 00:18:34.423 "low_priority_weight": 0, 00:18:34.423 "medium_priority_weight": 0, 00:18:34.423 "high_priority_weight": 0, 00:18:34.423 "nvme_adminq_poll_period_us": 10000, 00:18:34.423 "nvme_ioq_poll_period_us": 0, 00:18:34.423 "io_queue_requests": 512, 00:18:34.423 "delay_cmd_submit": true, 00:18:34.423 "transport_retry_count": 4, 00:18:34.423 "bdev_retry_count": 3, 00:18:34.423 "transport_ack_timeout": 0, 00:18:34.423 "ctrlr_loss_timeout_sec": 0, 00:18:34.423 "reconnect_delay_sec": 0, 00:18:34.423 "fast_io_fail_timeout_sec": 0, 00:18:34.423 "disable_auto_failback": false, 00:18:34.423 "generate_uuids": false, 00:18:34.423 "transport_tos": 0, 00:18:34.423 "nvme_error_stat": false, 00:18:34.423 "rdma_srq_size": 0, 00:18:34.423 "io_path_stat": false, 00:18:34.423 "allow_accel_sequence": false, 00:18:34.423 "rdma_max_cq_size": 0, 00:18:34.423 "rdma_cm_event_timeout_ms": 0, 
00:18:34.423 "dhchap_digests": [ 00:18:34.423 "sha256", 00:18:34.423 "sha384", 00:18:34.423 "sha512" 00:18:34.423 ], 00:18:34.423 "dhchap_dhgroups": [ 00:18:34.423 "null", 00:18:34.423 "ffdhe2048", 00:18:34.423 "ffdhe3072", 00:18:34.423 "ffdhe4096", 00:18:34.423 "ffdhe6144", 00:18:34.423 "ffdhe8192" 00:18:34.423 ] 00:18:34.423 } 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "method": "bdev_nvme_attach_controller", 00:18:34.423 "params": { 00:18:34.423 "name": "nvme0", 00:18:34.423 "trtype": "TCP", 00:18:34.423 "adrfam": "IPv4", 00:18:34.423 "traddr": "10.0.0.2", 00:18:34.423 "trsvcid": "4420", 00:18:34.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.423 "prchk_reftag": false, 00:18:34.423 "prchk_guard": false, 00:18:34.423 "ctrlr_loss_timeout_sec": 0, 00:18:34.423 "reconnect_delay_sec": 0, 00:18:34.423 "fast_io_fail_timeout_sec": 0, 00:18:34.423 "psk": "key0", 00:18:34.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.423 "hdgst": false, 00:18:34.423 "ddgst": false, 00:18:34.423 "multipath": "multipath" 00:18:34.423 } 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "method": "bdev_nvme_set_hotplug", 00:18:34.423 "params": { 00:18:34.423 "period_us": 100000, 00:18:34.423 "enable": false 00:18:34.423 } 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "method": "bdev_enable_histogram", 00:18:34.423 "params": { 00:18:34.423 "name": "nvme0n1", 00:18:34.423 "enable": true 00:18:34.423 } 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "method": "bdev_wait_for_examine" 00:18:34.423 } 00:18:34.423 ] 00:18:34.423 }, 00:18:34.423 { 00:18:34.423 "subsystem": "nbd", 00:18:34.423 "config": [] 00:18:34.423 } 00:18:34.423 ] 00:18:34.423 }' 00:18:34.423 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2637473 00:18:34.423 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2637473 ']' 00:18:34.423 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2637473 00:18:34.423 03:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:34.423 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:34.423 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637473
00:18:34.423 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:34.424 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:34.424 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2637473'
killing process with pid 2637473
00:18:34.424 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2637473
Received shutdown signal, test time was about 1.000000 seconds
00:18:34.424
00:18:34.424 Latency(us)
00:18:34.424 [2024-12-06T02:26:54.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:34.424 [2024-12-06T02:26:54.565Z] ===================================================================================================================
00:18:34.424 [2024-12-06T02:26:54.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:34.424 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2637473
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2637361
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2637361 ']'
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2637361
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:34.684
03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637361
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2637361'
killing process with pid 2637361
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2637361
00:18:34.684 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2637361
00:18:34.944 } 00:18:34.944 }, 00:18:34.944 { 00:18:34.944 "method": "sock_impl_set_options", 00:18:34.944 "params": { 00:18:34.944 "impl_name": "ssl", 00:18:34.944 "recv_buf_size": 4096, 00:18:34.944 "send_buf_size": 4096, 00:18:34.944 "enable_recv_pipe": true, 00:18:34.944 "enable_quickack": false, 00:18:34.944 "enable_placement_id": 0, 00:18:34.944 "enable_zerocopy_send_server": true, 00:18:34.944 "enable_zerocopy_send_client": false, 00:18:34.944 "zerocopy_threshold": 0, 00:18:34.944 "tls_version": 0, 00:18:34.944 "enable_ktls": false 00:18:34.944 } 00:18:34.944 }, 00:18:34.944 { 00:18:34.944 "method": "sock_impl_set_options", 00:18:34.944 "params": { 00:18:34.944 "impl_name": "posix", 00:18:34.944 "recv_buf_size": 2097152, 00:18:34.944 "send_buf_size": 2097152, 00:18:34.944 "enable_recv_pipe": true, 00:18:34.944 "enable_quickack": false, 00:18:34.944 "enable_placement_id": 0, 00:18:34.944 "enable_zerocopy_send_server": true, 00:18:34.944 "enable_zerocopy_send_client": false, 00:18:34.944 "zerocopy_threshold": 0, 00:18:34.944 "tls_version": 0, 00:18:34.944 "enable_ktls": false 00:18:34.944 } 00:18:34.944 } 00:18:34.944 ] 00:18:34.944 }, 00:18:34.944 { 00:18:34.944 "subsystem": "vmd", 00:18:34.944 "config": [] 00:18:34.944 }, 00:18:34.944 { 00:18:34.944 "subsystem": "accel", 00:18:34.944 "config": [ 00:18:34.944 { 00:18:34.944 "method": "accel_set_options", 00:18:34.944 "params": { 00:18:34.944 "small_cache_size": 128, 00:18:34.944 "large_cache_size": 16, 00:18:34.944 "task_count": 2048, 00:18:34.944 "sequence_count": 2048, 00:18:34.944 "buf_count": 2048 00:18:34.944 } 00:18:34.944 } 00:18:34.944 ] 00:18:34.944 }, 00:18:34.944 { 00:18:34.944 "subsystem": "bdev", 00:18:34.944 "config": [ 00:18:34.944 { 00:18:34.944 "method": "bdev_set_options", 00:18:34.944 "params": { 00:18:34.944 "bdev_io_pool_size": 65535, 00:18:34.944 "bdev_io_cache_size": 256, 00:18:34.944 "bdev_auto_examine": true, 00:18:34.944 "iobuf_small_cache_size": 128, 00:18:34.944 
"iobuf_large_cache_size": 16 00:18:34.944 } 00:18:34.944 }, 00:18:34.944 { 00:18:34.944 "method": "bdev_raid_set_options", 00:18:34.944 "params": { 00:18:34.944 "process_window_size_kb": 1024, 00:18:34.944 "process_max_bandwidth_mb_sec": 0 00:18:34.944 } 00:18:34.944 }, 00:18:34.944 { 00:18:34.945 "method": "bdev_iscsi_set_options", 00:18:34.945 "params": { 00:18:34.945 "timeout_sec": 30 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "bdev_nvme_set_options", 00:18:34.945 "params": { 00:18:34.945 "action_on_timeout": "none", 00:18:34.945 "timeout_us": 0, 00:18:34.945 "timeout_admin_us": 0, 00:18:34.945 "keep_alive_timeout_ms": 10000, 00:18:34.945 "arbitration_burst": 0, 00:18:34.945 "low_priority_weight": 0, 00:18:34.945 "medium_priority_weight": 0, 00:18:34.945 "high_priority_weight": 0, 00:18:34.945 "nvme_adminq_poll_period_us": 10000, 00:18:34.945 "nvme_ioq_poll_period_us": 0, 00:18:34.945 "io_queue_requests": 0, 00:18:34.945 "delay_cmd_submit": true, 00:18:34.945 "transport_retry_count": 4, 00:18:34.945 "bdev_retry_count": 3, 00:18:34.945 "transport_ack_timeout": 0, 00:18:34.945 "ctrlr_loss_timeout_sec": 0, 00:18:34.945 "reconnect_delay_sec": 0, 00:18:34.945 "fast_io_fail_timeout_sec": 0, 00:18:34.945 "disable_auto_failback": false, 00:18:34.945 "generate_uuids": false, 00:18:34.945 "transport_tos": 0, 00:18:34.945 "nvme_error_stat": false, 00:18:34.945 "rdma_srq_size": 0, 00:18:34.945 "io_path_stat": false, 00:18:34.945 "allow_accel_sequence": false, 00:18:34.945 "rdma_max_cq_size": 0, 00:18:34.945 "rdma_cm_event_timeout_ms": 0, 00:18:34.945 "dhchap_digests": [ 00:18:34.945 "sha256", 00:18:34.945 "sha384", 00:18:34.945 "sha512" 00:18:34.945 ], 00:18:34.945 "dhchap_dhgroups": [ 00:18:34.945 "null", 00:18:34.945 "ffdhe2048", 00:18:34.945 "ffdhe3072", 00:18:34.945 "ffdhe4096", 00:18:34.945 "ffdhe6144", 00:18:34.945 "ffdhe8192" 00:18:34.945 ] 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "bdev_nvme_set_hotplug", 
00:18:34.945 "params": { 00:18:34.945 "period_us": 100000, 00:18:34.945 "enable": false 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "bdev_malloc_create", 00:18:34.945 "params": { 00:18:34.945 "name": "malloc0", 00:18:34.945 "num_blocks": 8192, 00:18:34.945 "block_size": 4096, 00:18:34.945 "physical_block_size": 4096, 00:18:34.945 "uuid": "d2553f42-50a5-4652-820c-fddf293d0e72", 00:18:34.945 "optimal_io_boundary": 0, 00:18:34.945 "md_size": 0, 00:18:34.945 "dif_type": 0, 00:18:34.945 "dif_is_head_of_md": false, 00:18:34.945 "dif_pi_format": 0 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "bdev_wait_for_examine" 00:18:34.945 } 00:18:34.945 ] 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "subsystem": "nbd", 00:18:34.945 "config": [] 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "subsystem": "scheduler", 00:18:34.945 "config": [ 00:18:34.945 { 00:18:34.945 "method": "framework_set_scheduler", 00:18:34.945 "params": { 00:18:34.945 "name": "static" 00:18:34.945 } 00:18:34.945 } 00:18:34.945 ] 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "subsystem": "nvmf", 00:18:34.945 "config": [ 00:18:34.945 { 00:18:34.945 "method": "nvmf_set_config", 00:18:34.945 "params": { 00:18:34.945 "discovery_filter": "match_any", 00:18:34.945 "admin_cmd_passthru": { 00:18:34.945 "identify_ctrlr": false 00:18:34.945 }, 00:18:34.945 "dhchap_digests": [ 00:18:34.945 "sha256", 00:18:34.945 "sha384", 00:18:34.945 "sha512" 00:18:34.945 ], 00:18:34.945 "dhchap_dhgroups": [ 00:18:34.945 "null", 00:18:34.945 "ffdhe2048", 00:18:34.945 "ffdhe3072", 00:18:34.945 "ffdhe4096", 00:18:34.945 "ffdhe6144", 00:18:34.945 "ffdhe8192" 00:18:34.945 ] 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "nvmf_set_max_subsystems", 00:18:34.945 "params": { 00:18:34.945 "max_subsystems": 1024 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "nvmf_set_crdt", 00:18:34.945 "params": { 00:18:34.945 "crdt1": 0, 00:18:34.945 "crdt2": 0, 00:18:34.945 
"crdt3": 0 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "nvmf_create_transport", 00:18:34.945 "params": { 00:18:34.945 "trtype": "TCP", 00:18:34.945 "max_queue_depth": 128, 00:18:34.945 "max_io_qpairs_per_ctrlr": 127, 00:18:34.945 "in_capsule_data_size": 4096, 00:18:34.945 "max_io_size": 131072, 00:18:34.945 "io_unit_size": 131072, 00:18:34.945 "max_aq_depth": 128, 00:18:34.945 "num_shared_buffers": 511, 00:18:34.945 "buf_cache_size": 4294967295, 00:18:34.945 "dif_insert_or_strip": false, 00:18:34.945 "zcopy": false, 00:18:34.945 "c2h_success": false, 00:18:34.945 "sock_priority": 0, 00:18:34.945 "abort_timeout_sec": 1, 00:18:34.945 "ack_timeout": 0, 00:18:34.945 "data_wr_pool_size": 0 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "nvmf_create_subsystem", 00:18:34.945 "params": { 00:18:34.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.945 "allow_any_host": false, 00:18:34.945 "serial_number": "00000000000000000000", 00:18:34.945 "model_number": "SPDK bdev Controller", 00:18:34.945 "max_namespaces": 32, 00:18:34.945 "min_cntlid": 1, 00:18:34.945 "max_cntlid": 65519, 00:18:34.945 "ana_reporting": false 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "nvmf_subsystem_add_host", 00:18:34.945 "params": { 00:18:34.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.945 "host": "nqn.2016-06.io.spdk:host1", 00:18:34.945 "psk": "key0" 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "nvmf_subsystem_add_ns", 00:18:34.945 "params": { 00:18:34.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.945 "namespace": { 00:18:34.945 "nsid": 1, 00:18:34.945 "bdev_name": "malloc0", 00:18:34.945 "nguid": "D2553F4250A54652820CFDDF293D0E72", 00:18:34.945 "uuid": "d2553f42-50a5-4652-820c-fddf293d0e72", 00:18:34.945 "no_auto_visible": false 00:18:34.945 } 00:18:34.945 } 00:18:34.945 }, 00:18:34.945 { 00:18:34.945 "method": "nvmf_subsystem_add_listener", 00:18:34.945 "params": { 00:18:34.945 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:34.945 "listen_address": { 00:18:34.945 "trtype": "TCP", 00:18:34.945 "adrfam": "IPv4", 00:18:34.945 "traddr": "10.0.0.2", 00:18:34.945 "trsvcid": "4420" 00:18:34.945 }, 00:18:34.945 "secure_channel": false, 00:18:34.945 "sock_impl": "ssl" 00:18:34.945 } 00:18:34.945 } 00:18:34.945 ] 00:18:34.945 } 00:18:34.945 ] 00:18:34.945 }' 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2637938 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2637938 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2637938 ']' 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.945 03:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.945 [2024-12-06 03:26:54.900739] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:18:34.945 [2024-12-06 03:26:54.900788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:34.945 [2024-12-06 03:26:54.966107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:34.945 [2024-12-06 03:26:55.003188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:34.945 [2024-12-06 03:26:55.003223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:34.945 [2024-12-06 03:26:55.003231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:34.945 [2024-12-06 03:26:55.003237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:34.945 [2024-12-06 03:26:55.003242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:34.945 [2024-12-06 03:26:55.003821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:35.204 [2024-12-06 03:26:55.216558] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:35.204 [2024-12-06 03:26:55.248595] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:35.204 [2024-12-06 03:26:55.248813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2637975
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2637975 /var/tmp/bdevperf.sock
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2637975 ']'
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63
00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local
max_retries=100 00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.771 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:35.771 "subsystems": [ 00:18:35.771 { 00:18:35.771 "subsystem": "keyring", 00:18:35.771 "config": [ 00:18:35.771 { 00:18:35.771 "method": "keyring_file_add_key", 00:18:35.771 "params": { 00:18:35.771 "name": "key0", 00:18:35.771 "path": "/tmp/tmp.yx0YCZglnE" 00:18:35.771 } 00:18:35.771 } 00:18:35.771 ] 00:18:35.771 }, 00:18:35.771 { 00:18:35.771 "subsystem": "iobuf", 00:18:35.771 "config": [ 00:18:35.771 { 00:18:35.771 "method": "iobuf_set_options", 00:18:35.771 "params": { 00:18:35.771 "small_pool_count": 8192, 00:18:35.771 "large_pool_count": 1024, 00:18:35.771 "small_bufsize": 8192, 00:18:35.771 "large_bufsize": 135168, 00:18:35.771 "enable_numa": false 00:18:35.771 } 00:18:35.771 } 00:18:35.771 ] 00:18:35.771 }, 00:18:35.771 { 00:18:35.771 "subsystem": "sock", 00:18:35.771 "config": [ 00:18:35.771 { 00:18:35.771 "method": "sock_set_default_impl", 00:18:35.771 "params": { 00:18:35.771 "impl_name": "posix" 00:18:35.771 } 00:18:35.771 }, 00:18:35.771 { 00:18:35.771 "method": "sock_impl_set_options", 00:18:35.771 "params": { 00:18:35.771 "impl_name": "ssl", 00:18:35.771 "recv_buf_size": 4096, 00:18:35.771 "send_buf_size": 4096, 00:18:35.771 "enable_recv_pipe": true, 00:18:35.771 "enable_quickack": false, 00:18:35.771 "enable_placement_id": 0, 00:18:35.771 "enable_zerocopy_send_server": true, 00:18:35.771 "enable_zerocopy_send_client": false, 00:18:35.771 "zerocopy_threshold": 0, 00:18:35.771 "tls_version": 0, 00:18:35.771 "enable_ktls": false 00:18:35.771 } 00:18:35.771 }, 00:18:35.771 { 00:18:35.771 "method": "sock_impl_set_options", 00:18:35.771 "params": { 
00:18:35.771 "impl_name": "posix", 00:18:35.771 "recv_buf_size": 2097152, 00:18:35.771 "send_buf_size": 2097152, 00:18:35.771 "enable_recv_pipe": true, 00:18:35.771 "enable_quickack": false, 00:18:35.771 "enable_placement_id": 0, 00:18:35.771 "enable_zerocopy_send_server": true, 00:18:35.771 "enable_zerocopy_send_client": false, 00:18:35.771 "zerocopy_threshold": 0, 00:18:35.771 "tls_version": 0, 00:18:35.771 "enable_ktls": false 00:18:35.771 } 00:18:35.771 } 00:18:35.771 ] 00:18:35.771 }, 00:18:35.771 { 00:18:35.771 "subsystem": "vmd", 00:18:35.771 "config": [] 00:18:35.771 }, 00:18:35.771 { 00:18:35.771 "subsystem": "accel", 00:18:35.771 "config": [ 00:18:35.771 { 00:18:35.771 "method": "accel_set_options", 00:18:35.771 "params": { 00:18:35.771 "small_cache_size": 128, 00:18:35.771 "large_cache_size": 16, 00:18:35.771 "task_count": 2048, 00:18:35.771 "sequence_count": 2048, 00:18:35.772 "buf_count": 2048 00:18:35.772 } 00:18:35.772 } 00:18:35.772 ] 00:18:35.772 }, 00:18:35.772 { 00:18:35.772 "subsystem": "bdev", 00:18:35.772 "config": [ 00:18:35.772 { 00:18:35.772 "method": "bdev_set_options", 00:18:35.772 "params": { 00:18:35.772 "bdev_io_pool_size": 65535, 00:18:35.772 "bdev_io_cache_size": 256, 00:18:35.772 "bdev_auto_examine": true, 00:18:35.772 "iobuf_small_cache_size": 128, 00:18:35.772 "iobuf_large_cache_size": 16 00:18:35.772 } 00:18:35.772 }, 00:18:35.772 { 00:18:35.772 "method": "bdev_raid_set_options", 00:18:35.772 "params": { 00:18:35.772 "process_window_size_kb": 1024, 00:18:35.772 "process_max_bandwidth_mb_sec": 0 00:18:35.772 } 00:18:35.772 }, 00:18:35.772 { 00:18:35.772 "method": "bdev_iscsi_set_options", 00:18:35.772 "params": { 00:18:35.772 "timeout_sec": 30 00:18:35.772 } 00:18:35.772 }, 00:18:35.772 { 00:18:35.772 "method": "bdev_nvme_set_options", 00:18:35.772 "params": { 00:18:35.772 "action_on_timeout": "none", 00:18:35.772 "timeout_us": 0, 00:18:35.772 "timeout_admin_us": 0, 00:18:35.772 "keep_alive_timeout_ms": 10000, 00:18:35.772 
"arbitration_burst": 0, 00:18:35.772 "low_priority_weight": 0, 00:18:35.772 "medium_priority_weight": 0, 00:18:35.772 "high_priority_weight": 0, 00:18:35.772 "nvme_adminq_poll_period_us": 10000, 00:18:35.772 "nvme_ioq_poll_period_us": 0, 00:18:35.772 "io_queue_requests": 512, 00:18:35.772 "delay_cmd_submit": true, 00:18:35.772 "transport_retry_count": 4, 00:18:35.772 "bdev_retry_count": 3, 00:18:35.772 "transport_ack_timeout": 0, 00:18:35.772 "ctrlr_loss_timeout_sec": 0, 00:18:35.772 "reconnect_delay_sec": 0, 00:18:35.772 "fast_io_fail_timeout_sec": 0, 00:18:35.772 "disable_auto_failback": false, 00:18:35.772 "generate_uuids": false, 00:18:35.772 "transport_tos": 0, 00:18:35.772 "nvme_error_stat": false, 00:18:35.772 "rdma_srq_size": 0, 00:18:35.772 "io_path_stat": false, 00:18:35.772 "allow_accel_sequence": false, 00:18:35.772 "rdma_max_cq_size": 0, 00:18:35.772 "rdma_cm_event_timeout_ms": 0, 00:18:35.772 "dhchap_digests": [ 00:18:35.772 "sha256", 00:18:35.772 "sha384", 00:18:35.772 "sha512" 00:18:35.772 ], 00:18:35.772 "dhchap_dhgroups": [ 00:18:35.772 "null", 00:18:35.772 "ffdhe2048", 00:18:35.772 "ffdhe3072", 00:18:35.772 "ffdhe4096", 00:18:35.772 "ffdhe6144", 00:18:35.772 "ffdhe8192" 00:18:35.772 ] 00:18:35.772 } 00:18:35.772 }, 00:18:35.772 { 00:18:35.772 "method": "bdev_nvme_attach_controller", 00:18:35.772 "params": { 00:18:35.772 "name": "nvme0", 00:18:35.772 "trtype": "TCP", 00:18:35.772 "adrfam": "IPv4", 00:18:35.772 "traddr": "10.0.0.2", 00:18:35.772 "trsvcid": "4420", 00:18:35.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.772 "prchk_reftag": false, 00:18:35.772 "prchk_guard": false, 00:18:35.772 "ctrlr_loss_timeout_sec": 0, 00:18:35.772 "reconnect_delay_sec": 0, 00:18:35.772 "fast_io_fail_timeout_sec": 0, 00:18:35.772 "psk": "key0", 00:18:35.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.772 "hdgst": false, 00:18:35.772 "ddgst": false, 00:18:35.772 "multipath": "multipath" 00:18:35.772 } 00:18:35.772 }, 00:18:35.772 { 00:18:35.772 
"method": "bdev_nvme_set_hotplug", 00:18:35.772 "params": { 00:18:35.772 "period_us": 100000, 00:18:35.772 "enable": false 00:18:35.772 } 00:18:35.772 }, 00:18:35.772 { 00:18:35.772 "method": "bdev_enable_histogram", 00:18:35.772 "params": { 00:18:35.772 "name": "nvme0n1", 00:18:35.772 "enable": true 00:18:35.772 } 00:18:35.772 }, 00:18:35.772 { 00:18:35.772 "method": "bdev_wait_for_examine" 00:18:35.772 } 00:18:35.772 ] 00:18:35.772 }, 00:18:35.772 { 00:18:35.772 "subsystem": "nbd", 00:18:35.772 "config": [] 00:18:35.772 } 00:18:35.772 ] 00:18:35.772 }' 00:18:35.772 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.772 03:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.772 [2024-12-06 03:26:55.824732] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:18:35.772 [2024-12-06 03:26:55.824781] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637975 ] 00:18:35.772 [2024-12-06 03:26:55.887928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.030 [2024-12-06 03:26:55.932416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.030 [2024-12-06 03:26:56.087541] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.598 03:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.598 03:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:36.598 03:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:36.598 03:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:36.857 03:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.857 03:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:36.857 Running I/O for 1 seconds... 00:18:38.236 5161.00 IOPS, 20.16 MiB/s 00:18:38.236 Latency(us) 00:18:38.236 [2024-12-06T02:26:58.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.236 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:38.236 Verification LBA range: start 0x0 length 0x2000 00:18:38.236 nvme0n1 : 1.02 5210.26 20.35 0.00 0.00 24378.53 6639.08 54024.46 00:18:38.236 [2024-12-06T02:26:58.377Z] =================================================================================================================== 00:18:38.236 [2024-12-06T02:26:58.377Z] Total : 5210.26 20.35 0.00 0.00 24378.53 6639.08 54024.46 00:18:38.236 { 00:18:38.236 "results": [ 00:18:38.236 { 00:18:38.236 "job": "nvme0n1", 00:18:38.236 "core_mask": "0x2", 00:18:38.236 "workload": "verify", 00:18:38.236 "status": "finished", 00:18:38.236 "verify_range": { 00:18:38.236 "start": 0, 00:18:38.236 "length": 8192 00:18:38.236 }, 00:18:38.236 "queue_depth": 128, 00:18:38.236 "io_size": 4096, 00:18:38.236 "runtime": 1.015112, 00:18:38.236 "iops": 5210.26251290498, 00:18:38.236 "mibps": 20.35258794103508, 00:18:38.236 "io_failed": 0, 00:18:38.236 "io_timeout": 0, 00:18:38.236 "avg_latency_us": 24378.53055611729, 00:18:38.236 "min_latency_us": 6639.0817391304345, 00:18:38.236 "max_latency_us": 54024.459130434785 00:18:38.236 } 00:18:38.236 ], 00:18:38.236 "core_count": 1 00:18:38.236 } 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:38.236 03:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:38.236 03:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:38.236 nvmf_trace.0 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2637975 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2637975 ']' 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2637975 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2637975 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2637975' 00:18:38.236 killing process with pid 2637975 00:18:38.236 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2637975 00:18:38.236 Received shutdown signal, test time was about 1.000000 seconds 00:18:38.236 00:18:38.237 Latency(us) 00:18:38.237 [2024-12-06T02:26:58.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.237 [2024-12-06T02:26:58.378Z] =================================================================================================================== 00:18:38.237 [2024-12-06T02:26:58.378Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2637975 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:38.237 rmmod nvme_tcp 00:18:38.237 rmmod nvme_fabrics 00:18:38.237 rmmod nvme_keyring 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2637938 ']' 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2637938 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2637938 ']' 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2637938 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.237 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637938 00:18:38.496 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.496 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.496 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2637938' 00:18:38.496 killing process with pid 2637938 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2637938 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2637938 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.497 03:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.D12tBfAMKg /tmp/tmp.1uLNxe6jKv /tmp/tmp.yx0YCZglnE 00:18:41.034 00:18:41.034 real 1m18.652s 00:18:41.034 user 2m2.086s 00:18:41.034 sys 0m28.748s 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.034 ************************************ 00:18:41.034 END TEST nvmf_tls 00:18:41.034 ************************************ 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.034 ************************************ 00:18:41.034 START TEST nvmf_fips 00:18:41.034 ************************************ 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:41.034 * Looking for test storage... 00:18:41.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.034 
03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:41.034 03:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:41.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.034 --rc genhtml_branch_coverage=1 00:18:41.034 --rc genhtml_function_coverage=1 00:18:41.034 --rc genhtml_legend=1 00:18:41.034 --rc geninfo_all_blocks=1 00:18:41.034 --rc geninfo_unexecuted_blocks=1 00:18:41.034 00:18:41.034 ' 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:41.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.034 --rc genhtml_branch_coverage=1 00:18:41.034 --rc genhtml_function_coverage=1 00:18:41.034 --rc genhtml_legend=1 00:18:41.034 --rc geninfo_all_blocks=1 00:18:41.034 --rc geninfo_unexecuted_blocks=1 00:18:41.034 00:18:41.034 ' 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:41.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.034 --rc genhtml_branch_coverage=1 00:18:41.034 --rc genhtml_function_coverage=1 00:18:41.034 --rc genhtml_legend=1 00:18:41.034 --rc geninfo_all_blocks=1 00:18:41.034 --rc geninfo_unexecuted_blocks=1 00:18:41.034 00:18:41.034 ' 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:41.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.034 --rc genhtml_branch_coverage=1 00:18:41.034 --rc genhtml_function_coverage=1 00:18:41.034 --rc genhtml_legend=1 00:18:41.034 --rc geninfo_all_blocks=1 00:18:41.034 --rc geninfo_unexecuted_blocks=1 00:18:41.034 00:18:41.034 ' 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.034 03:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.034 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.035 03:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:41.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:41.035 03:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:41.035 Error setting digest 00:18:41.035 40128A55FE7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:41.035 40128A55FE7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:41.035 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:41.035 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.035 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.035 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.035 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:41.035 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:41.035 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.035 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:41.035 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:41.036 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:41.036 03:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.036 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.036 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.036 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:41.036 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:41.036 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:41.036 03:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
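The device tables built in nvmf/common.sh above (e810/x722/mlx arrays keyed by `$intel:...` and `$mellanox:...` IDs) boil down to a vendor:device to NIC-family lookup. A minimal sketch of that mapping, with IDs copied from the trace; `nic_family` is an illustrative helper, not part of nvmf/common.sh, and the Mellanox branch is collapsed to a vendor wildcard for brevity:

```shell
#!/usr/bin/env bash
# Classify a PCI vendor:device pair into the NIC families the autotest
# distinguishes. Intel IDs (0x8086:...) are taken verbatim from the trace;
# matching any 0x15b3 device as "mlx" is a simplification of the per-ID list.
nic_family() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}
nic_family 0x8086:0x159b   # the device found at 0000:86:00.0/1; prints: e810
```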
00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:46.303 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:46.303 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:46.303 Found net devices under 0000:86:00.0: cvl_0_0 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
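The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` glob traced above (nvmf/common.sh@411 and @427) is how the script maps a PCI address to its kernel net devices, then strips the path to keep interface names. A self-contained sketch of the same pattern against a throwaway mock of the sysfs layout, so it runs without real hardware:

```shell
#!/usr/bin/env bash
# Demonstrate the PCI -> net-device discovery pattern from the trace, using
# a temp directory as a stand-in for /sys/bus/pci/devices.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0"   # mock: one netdev under the PCI node
pci=0000:86:00.0
pci_net_devs=("$sysfs/$pci/net/"*)           # glob every netdev directory
pci_net_devs=("${pci_net_devs[@]##*/}")      # strip leading path components
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

The same two-step glob-then-strip idiom is what produces the `Found net devices under 0000:86:00.0: cvl_0_0` lines in the log.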
00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:46.303 Found net devices under 0000:86:00.1: cvl_0_1 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.303 03:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:46.303 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:46.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:18:46.562 00:18:46.562 --- 10.0.0.2 ping statistics --- 00:18:46.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.562 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:18:46.562 00:18:46.562 --- 10.0.0.1 ping statistics --- 00:18:46.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.562 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:46.562 03:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2642119 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2642119 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2642119 ']' 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.562 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.562 [2024-12-06 03:27:06.641700] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
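The `waitforlisten 2642119` step above blocks until the target's RPC socket (`/var/tmp/spdk.sock`) appears. A sketch of that polling pattern; `wait_for_path` is illustrative, not the real autotest_common.sh helper (which additionally checks that the process is still alive between retries):

```shell
#!/usr/bin/env bash
# Poll until a path (e.g. a UNIX domain socket) exists, up to max_retries
# attempts spaced 0.1s apart. Returns 0 on success, 1 on timeout.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &   # stand-in for the target creating its socket
wait_for_path "$tmp/spdk.sock" && echo "listener is up"
wait
rm -rf "$tmp"
```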
00:18:46.562 [2024-12-06 03:27:06.641748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.821 [2024-12-06 03:27:06.709242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.821 [2024-12-06 03:27:06.750744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.821 [2024-12-06 03:27:06.750781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.821 [2024-12-06 03:27:06.750789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.821 [2024-12-06 03:27:06.750795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.821 [2024-12-06 03:27:06.750800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:46.821 [2024-12-06 03:27:06.751372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ehW 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ehW 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ehW 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ehW 00:18:46.821 03:27:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:47.080 [2024-12-06 03:27:07.069064] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.080 [2024-12-06 03:27:07.085068] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.080 [2024-12-06 03:27:07.085261] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.080 malloc0 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2642201 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2642201 /var/tmp/bdevperf.sock 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2642201 ']' 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.080 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:47.080 [2024-12-06 03:27:07.204062] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
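The PSK handling traced in fips.sh@137-140 above writes the TLS pre-shared key to a mode-0600 temp file before passing the path to `keyring_file_add_key`, so only the owner can read the key material. A minimal reproduction of those four traced commands (the key is the sample interchange key from the trace itself):

```shell
#!/usr/bin/env bash
# Write a TLS PSK to a private temp file, mirroring fips.sh@137-140.
key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"     # -n: the key file must not end in a newline
chmod 0600 "$key_path"
stat -c %a "$key_path"           # prints: 600
```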
00:18:47.080 [2024-12-06 03:27:07.204112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2642201 ] 00:18:47.339 [2024-12-06 03:27:07.264429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.339 [2024-12-06 03:27:07.307674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.339 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.339 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:47.339 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ehW 00:18:47.598 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.858 [2024-12-06 03:27:07.768351] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.858 TLSTESTn1 00:18:47.858 03:27:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:47.858 Running I/O for 10 seconds... 
00:18:50.168 5262.00 IOPS, 20.55 MiB/s [2024-12-06T02:27:11.243Z] 5327.50 IOPS, 20.81 MiB/s [2024-12-06T02:27:12.178Z] 5362.33 IOPS, 20.95 MiB/s [2024-12-06T02:27:13.113Z] 5416.25 IOPS, 21.16 MiB/s [2024-12-06T02:27:14.050Z] 5372.80 IOPS, 20.99 MiB/s [2024-12-06T02:27:14.987Z] 5393.67 IOPS, 21.07 MiB/s [2024-12-06T02:27:16.371Z] 5374.57 IOPS, 20.99 MiB/s [2024-12-06T02:27:17.516Z] 5395.50 IOPS, 21.08 MiB/s [2024-12-06T02:27:18.202Z] 5393.00 IOPS, 21.07 MiB/s [2024-12-06T02:27:18.202Z] 5413.10 IOPS, 21.14 MiB/s 00:18:58.061 Latency(us) 00:18:58.061 [2024-12-06T02:27:18.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.061 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:58.061 Verification LBA range: start 0x0 length 0x2000 00:18:58.061 TLSTESTn1 : 10.01 5418.33 21.17 0.00 0.00 23588.08 5157.40 24048.86 00:18:58.061 [2024-12-06T02:27:18.202Z] =================================================================================================================== 00:18:58.061 [2024-12-06T02:27:18.202Z] Total : 5418.33 21.17 0.00 0.00 23588.08 5157.40 24048.86 00:18:58.061 { 00:18:58.061 "results": [ 00:18:58.062 { 00:18:58.062 "job": "TLSTESTn1", 00:18:58.062 "core_mask": "0x4", 00:18:58.062 "workload": "verify", 00:18:58.062 "status": "finished", 00:18:58.062 "verify_range": { 00:18:58.062 "start": 0, 00:18:58.062 "length": 8192 00:18:58.062 }, 00:18:58.062 "queue_depth": 128, 00:18:58.062 "io_size": 4096, 00:18:58.062 "runtime": 10.013783, 00:18:58.062 "iops": 5418.331913124141, 00:18:58.062 "mibps": 21.165359035641174, 00:18:58.062 "io_failed": 0, 00:18:58.062 "io_timeout": 0, 00:18:58.062 "avg_latency_us": 23588.083139012157, 00:18:58.062 "min_latency_us": 5157.398260869565, 00:18:58.062 "max_latency_us": 24048.862608695654 00:18:58.062 } 00:18:58.062 ], 00:18:58.062 "core_count": 1 00:18:58.062 } 00:18:58.062 03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:58.062 
03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:58.062 03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:58.062 03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:58.062 03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:58.062 03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:58.062 03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:58.062 03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:58.062 03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:58.062 03:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:58.062 nvmf_trace.0 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2642201 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2642201 ']' 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2642201 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2642201 00:18:58.062 03:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2642201' 00:18:58.062 killing process with pid 2642201 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2642201 00:18:58.062 Received shutdown signal, test time was about 10.000000 seconds 00:18:58.062 00:18:58.062 Latency(us) 00:18:58.062 [2024-12-06T02:27:18.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.062 [2024-12-06T02:27:18.203Z] =================================================================================================================== 00:18:58.062 [2024-12-06T02:27:18.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.062 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2642201 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:58.322 rmmod nvme_tcp 00:18:58.322 rmmod nvme_fabrics 00:18:58.322 rmmod nvme_keyring 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2642119 ']' 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2642119 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2642119 ']' 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2642119 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2642119 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2642119' 00:18:58.322 killing process with pid 2642119 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2642119 00:18:58.322 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2642119 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.582 03:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ehW 00:19:01.122 00:19:01.122 real 0m19.946s 00:19:01.122 user 0m21.027s 00:19:01.122 sys 0m9.265s 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:01.122 ************************************ 00:19:01.122 END TEST nvmf_fips 00:19:01.122 ************************************ 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:01.122 ************************************ 00:19:01.122 START TEST nvmf_control_msg_list 00:19:01.122 ************************************ 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:01.122 * Looking for test storage... 00:19:01.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.122 03:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:01.122 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:01.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.123 --rc genhtml_branch_coverage=1 00:19:01.123 --rc genhtml_function_coverage=1 00:19:01.123 --rc genhtml_legend=1 00:19:01.123 --rc geninfo_all_blocks=1 00:19:01.123 --rc geninfo_unexecuted_blocks=1 00:19:01.123 00:19:01.123 ' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:01.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.123 --rc genhtml_branch_coverage=1 00:19:01.123 --rc genhtml_function_coverage=1 00:19:01.123 --rc genhtml_legend=1 00:19:01.123 --rc geninfo_all_blocks=1 00:19:01.123 --rc geninfo_unexecuted_blocks=1 00:19:01.123 00:19:01.123 ' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:01.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.123 --rc genhtml_branch_coverage=1 00:19:01.123 --rc genhtml_function_coverage=1 00:19:01.123 --rc genhtml_legend=1 00:19:01.123 --rc geninfo_all_blocks=1 00:19:01.123 --rc geninfo_unexecuted_blocks=1 00:19:01.123 00:19:01.123 ' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:19:01.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.123 --rc genhtml_branch_coverage=1 00:19:01.123 --rc genhtml_function_coverage=1 00:19:01.123 --rc genhtml_legend=1 00:19:01.123 --rc geninfo_all_blocks=1 00:19:01.123 --rc geninfo_unexecuted_blocks=1 00:19:01.123 00:19:01.123 ' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.123 03:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:01.123 03:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:01.123 03:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.405 03:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:06.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:06.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:06.405 03:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:06.405 Found net devices under 0000:86:00.0: cvl_0_0 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.405 03:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:06.405 Found net devices under 0000:86:00.1: cvl_0_1 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:06.405 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.406 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.406 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.406 03:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:06.406 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:06.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:19:06.406 00:19:06.406 --- 10.0.0.2 ping statistics --- 00:19:06.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.406 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:19:06.406 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:19:06.406 00:19:06.406 --- 10.0.0.1 ping statistics --- 00:19:06.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.406 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:19:06.406 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.406 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:06.406 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2647902 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2647902 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2647902 ']' 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.666 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.666 [2024-12-06 03:27:26.642816] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:19:06.666 [2024-12-06 03:27:26.642867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.666 [2024-12-06 03:27:26.709303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.666 [2024-12-06 03:27:26.752550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.666 [2024-12-06 03:27:26.752580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.666 [2024-12-06 03:27:26.752588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.666 [2024-12-06 03:27:26.752594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.666 [2024-12-06 03:27:26.752599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:06.666 [2024-12-06 03:27:26.753169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.926 [2024-12-06 03:27:26.889844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.926 Malloc0 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.926 [2024-12-06 03:27:26.930173] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2648027 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2648029 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2648031 00:19:06.926 03:27:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2648027 00:19:06.926 [2024-12-06 03:27:26.988772] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:06.926 [2024-12-06 03:27:26.988967] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:06.926 [2024-12-06 03:27:26.989126] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:08.305 Initializing NVMe Controllers 00:19:08.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:08.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:08.305 Initialization complete. Launching workers. 00:19:08.305 ======================================================== 00:19:08.305 Latency(us) 00:19:08.305 Device Information : IOPS MiB/s Average min max 00:19:08.305 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5992.00 23.41 166.54 140.83 359.18 00:19:08.305 ======================================================== 00:19:08.305 Total : 5992.00 23.41 166.54 140.83 359.18 00:19:08.305 00:19:08.305 Initializing NVMe Controllers 00:19:08.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:08.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:08.305 Initialization complete. Launching workers. 
00:19:08.305 ======================================================== 00:19:08.305 Latency(us) 00:19:08.305 Device Information : IOPS MiB/s Average min max 00:19:08.305 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 226.00 0.88 4557.56 201.87 41978.25 00:19:08.305 ======================================================== 00:19:08.305 Total : 226.00 0.88 4557.56 201.87 41978.25 00:19:08.305 00:19:08.305 Initializing NVMe Controllers 00:19:08.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:08.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:08.305 Initialization complete. Launching workers. 00:19:08.305 ======================================================== 00:19:08.305 Latency(us) 00:19:08.305 Device Information : IOPS MiB/s Average min max 00:19:08.305 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6016.00 23.50 165.85 146.05 349.95 00:19:08.305 ======================================================== 00:19:08.305 Total : 6016.00 23.50 165.85 146.05 349.95 00:19:08.305 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2648029 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2648031 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:08.305 03:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:08.305 rmmod nvme_tcp 00:19:08.305 rmmod nvme_fabrics 00:19:08.305 rmmod nvme_keyring 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2647902 ']' 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2647902 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2647902 ']' 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2647902 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2647902 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2647902' 00:19:08.305 killing process with pid 2647902 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2647902 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2647902 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.305 03:27:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:10.839 00:19:10.839 real 0m9.758s 00:19:10.839 user 0m6.382s 
00:19:10.839 sys 0m5.278s 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:10.839 ************************************ 00:19:10.839 END TEST nvmf_control_msg_list 00:19:10.839 ************************************ 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:10.839 ************************************ 00:19:10.839 START TEST nvmf_wait_for_buf 00:19:10.839 ************************************ 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:10.839 * Looking for test storage... 
00:19:10.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:19:10.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.839 --rc genhtml_branch_coverage=1 00:19:10.839 --rc genhtml_function_coverage=1 00:19:10.839 --rc genhtml_legend=1 00:19:10.839 --rc geninfo_all_blocks=1 00:19:10.839 --rc geninfo_unexecuted_blocks=1 00:19:10.839 00:19:10.839 ' 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:10.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.839 --rc genhtml_branch_coverage=1 00:19:10.839 --rc genhtml_function_coverage=1 00:19:10.839 --rc genhtml_legend=1 00:19:10.839 --rc geninfo_all_blocks=1 00:19:10.839 --rc geninfo_unexecuted_blocks=1 00:19:10.839 00:19:10.839 ' 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:10.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.839 --rc genhtml_branch_coverage=1 00:19:10.839 --rc genhtml_function_coverage=1 00:19:10.839 --rc genhtml_legend=1 00:19:10.839 --rc geninfo_all_blocks=1 00:19:10.839 --rc geninfo_unexecuted_blocks=1 00:19:10.839 00:19:10.839 ' 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:10.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.839 --rc genhtml_branch_coverage=1 00:19:10.839 --rc genhtml_function_coverage=1 00:19:10.839 --rc genhtml_legend=1 00:19:10.839 --rc geninfo_all_blocks=1 00:19:10.839 --rc geninfo_unexecuted_blocks=1 00:19:10.839 00:19:10.839 ' 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.839 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:10.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:10.840 03:27:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:16.132 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:16.132 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:16.132 Found net devices under 0000:86:00.0: cvl_0_0 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.132 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:16.132 03:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:16.133 Found net devices under 0000:86:00.1: cvl_0_1 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:16.133 03:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.133 03:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:16.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:19:16.133 00:19:16.133 --- 10.0.0.2 ping statistics --- 00:19:16.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.133 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:19:16.133 00:19:16.133 --- 10.0.0.1 ping statistics --- 00:19:16.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.133 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2651673 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2651673 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2651673 ']' 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.133 03:27:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 [2024-12-06 03:27:36.008591] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:19:16.133 [2024-12-06 03:27:36.008638] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.133 [2024-12-06 03:27:36.074740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.133 [2024-12-06 03:27:36.115525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.133 [2024-12-06 03:27:36.115560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:16.133 [2024-12-06 03:27:36.115567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.133 [2024-12-06 03:27:36.115577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.133 [2024-12-06 03:27:36.115598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.133 [2024-12-06 03:27:36.116143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 
03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.133 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.392 Malloc0 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.392 [2024-12-06 03:27:36.298212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.392 [2024-12-06 03:27:36.326390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:16.392 03:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:16.392 [2024-12-06 03:27:36.415026] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:18.294 Initializing NVMe Controllers 00:19:18.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:18.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:18.294 Initialization complete. Launching workers. 00:19:18.294 ======================================================== 00:19:18.294 Latency(us) 00:19:18.294 Device Information : IOPS MiB/s Average min max 00:19:18.294 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 29.00 3.62 147253.16 7264.78 192523.12 00:19:18.294 ======================================================== 00:19:18.294 Total : 29.00 3.62 147253.16 7264.78 192523.12 00:19:18.294 00:19:18.294 03:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:18.294 03:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:18.294 03:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.294 03:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.294 03:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=438 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 438 -eq 0 ]] 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:18.294 rmmod nvme_tcp 00:19:18.294 rmmod nvme_fabrics 00:19:18.294 rmmod nvme_keyring 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2651673 ']' 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2651673 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2651673 ']' 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2651673 
00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2651673 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2651673' 00:19:18.294 killing process with pid 2651673 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2651673 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2651673 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:18.294 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:18.295 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:18.295 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:18.295 03:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:18.295 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.295 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.295 03:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.827 03:27:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:20.827 00:19:20.827 real 0m9.836s 00:19:20.827 user 0m3.918s 00:19:20.827 sys 0m4.348s 00:19:20.827 03:27:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.827 03:27:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:20.827 ************************************ 00:19:20.827 END TEST nvmf_wait_for_buf 00:19:20.827 ************************************ 00:19:20.827 03:27:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:20.827 03:27:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:20.827 03:27:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:20.827 03:27:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:20.827 03:27:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.827 03:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:26.105 
03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.105 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:26.106 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.106 03:27:45 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:26.106 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:26.106 Found net devices under 0000:86:00.0: cvl_0_0 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:26.106 Found net devices under 0000:86:00.1: cvl_0_1 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.106 ************************************ 00:19:26.106 START TEST nvmf_perf_adq 00:19:26.106 ************************************ 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:26.106 * Looking for test storage... 00:19:26.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:26.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.106 --rc genhtml_branch_coverage=1 00:19:26.106 --rc genhtml_function_coverage=1 00:19:26.106 --rc genhtml_legend=1 00:19:26.106 --rc geninfo_all_blocks=1 00:19:26.106 --rc geninfo_unexecuted_blocks=1 00:19:26.106 00:19:26.106 ' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:26.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.106 --rc genhtml_branch_coverage=1 00:19:26.106 --rc genhtml_function_coverage=1 00:19:26.106 --rc genhtml_legend=1 00:19:26.106 --rc geninfo_all_blocks=1 00:19:26.106 --rc geninfo_unexecuted_blocks=1 00:19:26.106 00:19:26.106 ' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:26.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.106 --rc genhtml_branch_coverage=1 00:19:26.106 --rc genhtml_function_coverage=1 00:19:26.106 --rc genhtml_legend=1 00:19:26.106 --rc geninfo_all_blocks=1 00:19:26.106 --rc geninfo_unexecuted_blocks=1 00:19:26.106 00:19:26.106 ' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:26.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.106 --rc genhtml_branch_coverage=1 00:19:26.106 --rc genhtml_function_coverage=1 00:19:26.106 --rc genhtml_legend=1 00:19:26.106 --rc geninfo_all_blocks=1 00:19:26.106 --rc geninfo_unexecuted_blocks=1 00:19:26.106 00:19:26.106 ' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.106 03:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:26.106 03:27:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:31.375 03:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:31.375 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.375 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:31.376 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:31.376 Found net devices under 0000:86:00.0: cvl_0_0 00:19:31.376 03:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:31.376 Found net devices under 0000:86:00.1: cvl_0_1 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:31.376 03:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:31.635 03:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:34.171 03:27:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.437 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:39.438 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:39.438 03:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:39.438 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:39.438 Found net devices under 0000:86:00.0: cvl_0_0 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:39.438 Found net devices under 0000:86:00.1: cvl_0_1 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:39.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:39.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:19:39.438 00:19:39.438 --- 10.0.0.2 ping statistics --- 00:19:39.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.438 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:19:39.438 00:19:39.438 --- 10.0.0.1 ping statistics --- 00:19:39.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.438 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2659787 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2659787 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2659787 ']' 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.438 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.439 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:39.439 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.439 03:27:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 [2024-12-06 03:27:58.999527] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:19:39.439 [2024-12-06 03:27:58.999571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.439 [2024-12-06 03:27:59.067602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:39.439 [2024-12-06 03:27:59.112393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.439 [2024-12-06 03:27:59.112429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.439 [2024-12-06 03:27:59.112437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.439 [2024-12-06 03:27:59.112443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.439 [2024-12-06 03:27:59.112449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:39.439 [2024-12-06 03:27:59.113898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.439 [2024-12-06 03:27:59.113997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.439 [2024-12-06 03:27:59.114071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.439 [2024-12-06 03:27:59.114073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:39.439 03:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 [2024-12-06 03:27:59.333351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 Malloc1 00:19:39.439 03:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.439 [2024-12-06 03:27:59.397808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2659978 00:19:39.439 03:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:39.439 03:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:41.337 "tick_rate": 2300000000, 00:19:41.337 "poll_groups": [ 00:19:41.337 { 00:19:41.337 "name": "nvmf_tgt_poll_group_000", 00:19:41.337 "admin_qpairs": 1, 00:19:41.337 "io_qpairs": 1, 00:19:41.337 "current_admin_qpairs": 1, 00:19:41.337 "current_io_qpairs": 1, 00:19:41.337 "pending_bdev_io": 0, 00:19:41.337 "completed_nvme_io": 19973, 00:19:41.337 "transports": [ 00:19:41.337 { 00:19:41.337 "trtype": "TCP" 00:19:41.337 } 00:19:41.337 ] 00:19:41.337 }, 00:19:41.337 { 00:19:41.337 "name": "nvmf_tgt_poll_group_001", 00:19:41.337 "admin_qpairs": 0, 00:19:41.337 "io_qpairs": 1, 00:19:41.337 "current_admin_qpairs": 0, 00:19:41.337 "current_io_qpairs": 1, 00:19:41.337 "pending_bdev_io": 0, 00:19:41.337 "completed_nvme_io": 20147, 00:19:41.337 "transports": [ 00:19:41.337 { 00:19:41.337 "trtype": "TCP" 00:19:41.337 } 00:19:41.337 ] 00:19:41.337 }, 00:19:41.337 { 00:19:41.337 "name": "nvmf_tgt_poll_group_002", 00:19:41.337 "admin_qpairs": 0, 00:19:41.337 "io_qpairs": 1, 00:19:41.337 "current_admin_qpairs": 0, 00:19:41.337 "current_io_qpairs": 1, 00:19:41.337 "pending_bdev_io": 0, 00:19:41.337 "completed_nvme_io": 20087, 00:19:41.337 
"transports": [ 00:19:41.337 { 00:19:41.337 "trtype": "TCP" 00:19:41.337 } 00:19:41.337 ] 00:19:41.337 }, 00:19:41.337 { 00:19:41.337 "name": "nvmf_tgt_poll_group_003", 00:19:41.337 "admin_qpairs": 0, 00:19:41.337 "io_qpairs": 1, 00:19:41.337 "current_admin_qpairs": 0, 00:19:41.337 "current_io_qpairs": 1, 00:19:41.337 "pending_bdev_io": 0, 00:19:41.337 "completed_nvme_io": 20120, 00:19:41.337 "transports": [ 00:19:41.337 { 00:19:41.337 "trtype": "TCP" 00:19:41.337 } 00:19:41.337 ] 00:19:41.337 } 00:19:41.337 ] 00:19:41.337 }' 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:41.337 03:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2659978 00:19:49.454 Initializing NVMe Controllers 00:19:49.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:49.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:49.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:49.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:49.454 Initialization complete. Launching workers. 
00:19:49.454 ======================================================== 00:19:49.454 Latency(us) 00:19:49.454 Device Information : IOPS MiB/s Average min max 00:19:49.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10669.70 41.68 5997.78 1555.71 10262.06 00:19:49.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10811.30 42.23 5919.86 1846.78 10183.34 00:19:49.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10757.70 42.02 5949.80 2142.33 10144.55 00:19:49.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10735.50 41.94 5960.17 2650.83 10615.25 00:19:49.454 ======================================================== 00:19:49.454 Total : 42974.19 167.87 5956.77 1555.71 10615.25 00:19:49.454 00:19:49.454 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:49.454 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.454 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:49.454 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.454 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:49.454 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.454 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.454 rmmod nvme_tcp 00:19:49.714 rmmod nvme_fabrics 00:19:49.714 rmmod nvme_keyring 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:49.714 03:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2659787 ']' 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2659787 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2659787 ']' 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2659787 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659787 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.714 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659787' 00:19:49.715 killing process with pid 2659787 00:19:49.715 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2659787 00:19:49.715 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2659787 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:49.975 
03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.975 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.883 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:51.883 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:51.883 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:51.883 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:53.261 03:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:55.162 03:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.462 03:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:00.462 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:00.462 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.462 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:00.463 Found net devices under 0000:86:00.0: cvl_0_0 00:20:00.463 03:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:00.463 Found net devices under 0000:86:00.1: cvl_0_1 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:00.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:20:00.463 00:20:00.463 --- 10.0.0.2 ping statistics --- 00:20:00.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.463 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:20:00.463 00:20:00.463 --- 10.0.0.1 ping statistics --- 00:20:00.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.463 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:00.463 net.core.busy_poll = 1 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:00.463 net.core.busy_read = 1 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2663722 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2663722 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2663722 ']' 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.463 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.463 [2024-12-06 03:28:20.596272] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:20:00.463 [2024-12-06 03:28:20.596328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.724 [2024-12-06 03:28:20.664911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.724 [2024-12-06 03:28:20.709874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.724 [2024-12-06 03:28:20.709912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.724 [2024-12-06 03:28:20.709920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.724 [2024-12-06 03:28:20.709926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:00.724 [2024-12-06 03:28:20.709931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.724 [2024-12-06 03:28:20.711509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.724 [2024-12-06 03:28:20.711604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.724 [2024-12-06 03:28:20.711686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.724 [2024-12-06 03:28:20.711688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.724 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.985 [2024-12-06 03:28:20.917807] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.985 03:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.985 Malloc1 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.985 [2024-12-06 03:28:20.978891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2663851 
00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:00.985 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:02.893 03:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:02.893 03:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.893 03:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.893 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.893 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:02.893 "tick_rate": 2300000000, 00:20:02.893 "poll_groups": [ 00:20:02.893 { 00:20:02.893 "name": "nvmf_tgt_poll_group_000", 00:20:02.893 "admin_qpairs": 1, 00:20:02.893 "io_qpairs": 2, 00:20:02.893 "current_admin_qpairs": 1, 00:20:02.893 "current_io_qpairs": 2, 00:20:02.893 "pending_bdev_io": 0, 00:20:02.893 "completed_nvme_io": 28075, 00:20:02.893 "transports": [ 00:20:02.893 { 00:20:02.893 "trtype": "TCP" 00:20:02.893 } 00:20:02.893 ] 00:20:02.893 }, 00:20:02.893 { 00:20:02.893 "name": "nvmf_tgt_poll_group_001", 00:20:02.893 "admin_qpairs": 0, 00:20:02.893 "io_qpairs": 2, 00:20:02.893 "current_admin_qpairs": 0, 00:20:02.893 "current_io_qpairs": 2, 00:20:02.893 "pending_bdev_io": 0, 00:20:02.893 "completed_nvme_io": 28260, 00:20:02.893 "transports": [ 00:20:02.893 { 00:20:02.893 "trtype": "TCP" 00:20:02.893 } 00:20:02.893 ] 00:20:02.893 }, 00:20:02.893 { 00:20:02.893 "name": "nvmf_tgt_poll_group_002", 00:20:02.893 "admin_qpairs": 0, 00:20:02.893 "io_qpairs": 0, 00:20:02.893 "current_admin_qpairs": 0, 
00:20:02.893 "current_io_qpairs": 0, 00:20:02.893 "pending_bdev_io": 0, 00:20:02.893 "completed_nvme_io": 0, 00:20:02.893 "transports": [ 00:20:02.893 { 00:20:02.893 "trtype": "TCP" 00:20:02.893 } 00:20:02.893 ] 00:20:02.893 }, 00:20:02.893 { 00:20:02.893 "name": "nvmf_tgt_poll_group_003", 00:20:02.893 "admin_qpairs": 0, 00:20:02.893 "io_qpairs": 0, 00:20:02.893 "current_admin_qpairs": 0, 00:20:02.893 "current_io_qpairs": 0, 00:20:02.893 "pending_bdev_io": 0, 00:20:02.893 "completed_nvme_io": 0, 00:20:02.893 "transports": [ 00:20:02.893 { 00:20:02.893 "trtype": "TCP" 00:20:02.893 } 00:20:02.893 ] 00:20:02.893 } 00:20:02.893 ] 00:20:02.893 }' 00:20:02.893 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:02.893 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:03.153 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:03.153 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:03.153 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2663851 00:20:11.272 Initializing NVMe Controllers 00:20:11.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:11.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:11.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:11.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:11.272 Initialization complete. Launching workers. 
00:20:11.272 ======================================================== 00:20:11.272 Latency(us) 00:20:11.272 Device Information : IOPS MiB/s Average min max 00:20:11.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7797.60 30.46 8232.34 1463.36 52518.38 00:20:11.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8345.30 32.60 7694.13 1443.97 53095.65 00:20:11.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6582.20 25.71 9726.32 1462.69 53646.93 00:20:11.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7228.50 28.24 8852.87 1532.24 53594.72 00:20:11.272 ======================================================== 00:20:11.272 Total : 29953.59 117.01 8560.44 1443.97 53646.93 00:20:11.272 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.272 rmmod nvme_tcp 00:20:11.272 rmmod nvme_fabrics 00:20:11.272 rmmod nvme_keyring 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:11.272 03:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2663722 ']' 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2663722 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2663722 ']' 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2663722 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2663722 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2663722' 00:20:11.272 killing process with pid 2663722 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2663722 00:20:11.272 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2663722 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:11.531 
03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.531 03:28:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:14.822 00:20:14.822 real 0m49.361s 00:20:14.822 user 2m43.772s 00:20:14.822 sys 0m10.061s 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.822 ************************************ 00:20:14.822 END TEST nvmf_perf_adq 00:20:14.822 ************************************ 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.822 ************************************ 00:20:14.822 START TEST nvmf_shutdown 00:20:14.822 ************************************ 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:14.822 * Looking for test storage... 00:20:14.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.822 03:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:14.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.822 --rc genhtml_branch_coverage=1 00:20:14.822 --rc genhtml_function_coverage=1 00:20:14.822 --rc genhtml_legend=1 00:20:14.822 --rc geninfo_all_blocks=1 00:20:14.822 --rc geninfo_unexecuted_blocks=1 00:20:14.822 00:20:14.822 ' 00:20:14.822 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:14.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.823 --rc genhtml_branch_coverage=1 00:20:14.823 --rc genhtml_function_coverage=1 00:20:14.823 --rc genhtml_legend=1 00:20:14.823 --rc geninfo_all_blocks=1 00:20:14.823 --rc geninfo_unexecuted_blocks=1 00:20:14.823 00:20:14.823 ' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:14.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.823 --rc genhtml_branch_coverage=1 00:20:14.823 --rc genhtml_function_coverage=1 00:20:14.823 --rc genhtml_legend=1 00:20:14.823 --rc geninfo_all_blocks=1 00:20:14.823 --rc geninfo_unexecuted_blocks=1 00:20:14.823 00:20:14.823 ' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:14.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.823 --rc genhtml_branch_coverage=1 00:20:14.823 --rc genhtml_function_coverage=1 00:20:14.823 --rc genhtml_legend=1 00:20:14.823 --rc geninfo_all_blocks=1 00:20:14.823 --rc geninfo_unexecuted_blocks=1 00:20:14.823 00:20:14.823 ' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:14.823 ************************************ 00:20:14.823 START TEST nvmf_shutdown_tc1 00:20:14.823 ************************************ 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.823 03:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:20.105 03:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.105 03:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:20.105 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.105 03:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:20.105 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:20.105 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:20.106 Found net devices under 0000:86:00.0: cvl_0_0 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:20.106 Found net devices under 0000:86:00.1: cvl_0_1 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:20.106 03:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:20.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:20:20.106 00:20:20.106 --- 10.0.0.2 ping statistics --- 00:20:20.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.106 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:20:20.106 00:20:20.106 --- 10.0.0.1 ping statistics --- 00:20:20.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.106 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2669083 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2669083 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2669083 ']' 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.106 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 [2024-12-06 03:28:40.035614] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:20:20.106 [2024-12-06 03:28:40.035669] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.106 [2024-12-06 03:28:40.103333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.106 [2024-12-06 03:28:40.146609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.106 [2024-12-06 03:28:40.146646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.106 [2024-12-06 03:28:40.146653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.106 [2024-12-06 03:28:40.146659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.106 [2024-12-06 03:28:40.146665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.106 [2024-12-06 03:28:40.148324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.106 [2024-12-06 03:28:40.148408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.106 [2024-12-06 03:28:40.148702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:20.106 [2024-12-06 03:28:40.148703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.366 [2024-12-06 03:28:40.287345] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.366 03:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:20.366 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.367 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.367 Malloc1 00:20:20.367 [2024-12-06 03:28:40.404735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.367 Malloc2 00:20:20.367 Malloc3 00:20:20.626 Malloc4 00:20:20.626 Malloc5 00:20:20.626 Malloc6 00:20:20.626 Malloc7 00:20:20.626 Malloc8 00:20:20.626 Malloc9 
00:20:20.887 Malloc10 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2669354 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2669354 /var/tmp/bdevperf.sock 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2669354 ']' 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.887 { 00:20:20.887 "params": { 00:20:20.887 "name": "Nvme$subsystem", 00:20:20.887 "trtype": "$TEST_TRANSPORT", 00:20:20.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.887 "adrfam": "ipv4", 00:20:20.887 "trsvcid": "$NVMF_PORT", 00:20:20.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.887 "hdgst": ${hdgst:-false}, 00:20:20.887 "ddgst": ${ddgst:-false} 00:20:20.887 }, 00:20:20.887 "method": "bdev_nvme_attach_controller" 00:20:20.887 } 00:20:20.887 EOF 00:20:20.887 )") 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.887 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.887 { 00:20:20.887 "params": { 00:20:20.887 "name": "Nvme$subsystem", 00:20:20.887 "trtype": "$TEST_TRANSPORT", 00:20:20.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.887 "adrfam": "ipv4", 00:20:20.887 "trsvcid": "$NVMF_PORT", 00:20:20.887 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.888 "hdgst": ${hdgst:-false}, 00:20:20.888 "ddgst": ${ddgst:-false} 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 } 00:20:20.888 EOF 00:20:20.888 )") 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.888 { 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme$subsystem", 00:20:20.888 "trtype": "$TEST_TRANSPORT", 00:20:20.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "$NVMF_PORT", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.888 "hdgst": ${hdgst:-false}, 00:20:20.888 "ddgst": ${ddgst:-false} 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 } 00:20:20.888 EOF 00:20:20.888 )") 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.888 { 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme$subsystem", 00:20:20.888 "trtype": "$TEST_TRANSPORT", 00:20:20.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "$NVMF_PORT", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.888 "hdgst": 
${hdgst:-false}, 00:20:20.888 "ddgst": ${ddgst:-false} 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 } 00:20:20.888 EOF 00:20:20.888 )") 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.888 { 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme$subsystem", 00:20:20.888 "trtype": "$TEST_TRANSPORT", 00:20:20.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "$NVMF_PORT", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.888 "hdgst": ${hdgst:-false}, 00:20:20.888 "ddgst": ${ddgst:-false} 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 } 00:20:20.888 EOF 00:20:20.888 )") 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.888 { 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme$subsystem", 00:20:20.888 "trtype": "$TEST_TRANSPORT", 00:20:20.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "$NVMF_PORT", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.888 "hdgst": ${hdgst:-false}, 00:20:20.888 "ddgst": ${ddgst:-false} 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 
00:20:20.888 } 00:20:20.888 EOF 00:20:20.888 )") 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.888 [2024-12-06 03:28:40.884868] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:20:20.888 [2024-12-06 03:28:40.884915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.888 { 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme$subsystem", 00:20:20.888 "trtype": "$TEST_TRANSPORT", 00:20:20.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "$NVMF_PORT", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.888 "hdgst": ${hdgst:-false}, 00:20:20.888 "ddgst": ${ddgst:-false} 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 } 00:20:20.888 EOF 00:20:20.888 )") 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.888 { 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme$subsystem", 00:20:20.888 "trtype": "$TEST_TRANSPORT", 00:20:20.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "$NVMF_PORT", 
00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.888 "hdgst": ${hdgst:-false}, 00:20:20.888 "ddgst": ${ddgst:-false} 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 } 00:20:20.888 EOF 00:20:20.888 )") 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.888 { 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme$subsystem", 00:20:20.888 "trtype": "$TEST_TRANSPORT", 00:20:20.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "$NVMF_PORT", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.888 "hdgst": ${hdgst:-false}, 00:20:20.888 "ddgst": ${ddgst:-false} 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 } 00:20:20.888 EOF 00:20:20.888 )") 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.888 { 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme$subsystem", 00:20:20.888 "trtype": "$TEST_TRANSPORT", 00:20:20.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "$NVMF_PORT", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:20:20.888 "hdgst": ${hdgst:-false}, 00:20:20.888 "ddgst": ${ddgst:-false} 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 } 00:20:20.888 EOF 00:20:20.888 )") 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:20.888 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme1", 00:20:20.888 "trtype": "tcp", 00:20:20.888 "traddr": "10.0.0.2", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "4420", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.888 "hdgst": false, 00:20:20.888 "ddgst": false 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 },{ 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme2", 00:20:20.888 "trtype": "tcp", 00:20:20.888 "traddr": "10.0.0.2", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "4420", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:20.888 "hdgst": false, 00:20:20.888 "ddgst": false 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 },{ 00:20:20.888 "params": { 00:20:20.888 "name": "Nvme3", 00:20:20.888 "trtype": "tcp", 00:20:20.888 "traddr": "10.0.0.2", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "4420", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:20.888 "hdgst": false, 00:20:20.888 "ddgst": false 00:20:20.888 }, 00:20:20.888 "method": "bdev_nvme_attach_controller" 00:20:20.888 },{ 00:20:20.888 "params": { 00:20:20.888 
"name": "Nvme4", 00:20:20.888 "trtype": "tcp", 00:20:20.888 "traddr": "10.0.0.2", 00:20:20.888 "adrfam": "ipv4", 00:20:20.888 "trsvcid": "4420", 00:20:20.888 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:20.888 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:20.888 "hdgst": false, 00:20:20.889 "ddgst": false 00:20:20.889 }, 00:20:20.889 "method": "bdev_nvme_attach_controller" 00:20:20.889 },{ 00:20:20.889 "params": { 00:20:20.889 "name": "Nvme5", 00:20:20.889 "trtype": "tcp", 00:20:20.889 "traddr": "10.0.0.2", 00:20:20.889 "adrfam": "ipv4", 00:20:20.889 "trsvcid": "4420", 00:20:20.889 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:20.889 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:20.889 "hdgst": false, 00:20:20.889 "ddgst": false 00:20:20.889 }, 00:20:20.889 "method": "bdev_nvme_attach_controller" 00:20:20.889 },{ 00:20:20.889 "params": { 00:20:20.889 "name": "Nvme6", 00:20:20.889 "trtype": "tcp", 00:20:20.889 "traddr": "10.0.0.2", 00:20:20.889 "adrfam": "ipv4", 00:20:20.889 "trsvcid": "4420", 00:20:20.889 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:20.889 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:20.889 "hdgst": false, 00:20:20.889 "ddgst": false 00:20:20.889 }, 00:20:20.889 "method": "bdev_nvme_attach_controller" 00:20:20.889 },{ 00:20:20.889 "params": { 00:20:20.889 "name": "Nvme7", 00:20:20.889 "trtype": "tcp", 00:20:20.889 "traddr": "10.0.0.2", 00:20:20.889 "adrfam": "ipv4", 00:20:20.889 "trsvcid": "4420", 00:20:20.889 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:20.889 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:20.889 "hdgst": false, 00:20:20.889 "ddgst": false 00:20:20.889 }, 00:20:20.889 "method": "bdev_nvme_attach_controller" 00:20:20.889 },{ 00:20:20.889 "params": { 00:20:20.889 "name": "Nvme8", 00:20:20.889 "trtype": "tcp", 00:20:20.889 "traddr": "10.0.0.2", 00:20:20.889 "adrfam": "ipv4", 00:20:20.889 "trsvcid": "4420", 00:20:20.889 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:20.889 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:20.889 
"hdgst": false, 00:20:20.889 "ddgst": false 00:20:20.889 }, 00:20:20.889 "method": "bdev_nvme_attach_controller" 00:20:20.889 },{ 00:20:20.889 "params": { 00:20:20.889 "name": "Nvme9", 00:20:20.889 "trtype": "tcp", 00:20:20.889 "traddr": "10.0.0.2", 00:20:20.889 "adrfam": "ipv4", 00:20:20.889 "trsvcid": "4420", 00:20:20.889 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:20.889 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:20.889 "hdgst": false, 00:20:20.889 "ddgst": false 00:20:20.889 }, 00:20:20.889 "method": "bdev_nvme_attach_controller" 00:20:20.889 },{ 00:20:20.889 "params": { 00:20:20.889 "name": "Nvme10", 00:20:20.889 "trtype": "tcp", 00:20:20.889 "traddr": "10.0.0.2", 00:20:20.889 "adrfam": "ipv4", 00:20:20.889 "trsvcid": "4420", 00:20:20.889 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:20.889 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:20.889 "hdgst": false, 00:20:20.889 "ddgst": false 00:20:20.889 }, 00:20:20.889 "method": "bdev_nvme_attach_controller" 00:20:20.889 }' 00:20:20.889 [2024-12-06 03:28:40.950202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.889 [2024-12-06 03:28:40.991512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.820 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.820 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:22.820 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:22.820 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.820 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:22.820 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.820 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2669354 00:20:22.820 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:22.820 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:23.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2669354 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2669083 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.761 "name": "Nvme$subsystem", 00:20:23.761 "trtype": "$TEST_TRANSPORT", 00:20:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.761 "adrfam": "ipv4", 00:20:23.761 "trsvcid": "$NVMF_PORT", 00:20:23.761 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.761 "hdgst": ${hdgst:-false}, 00:20:23.761 "ddgst": ${ddgst:-false} 00:20:23.761 }, 00:20:23.761 "method": "bdev_nvme_attach_controller" 00:20:23.761 } 00:20:23.761 EOF 00:20:23.761 )") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.761 "name": "Nvme$subsystem", 00:20:23.761 "trtype": "$TEST_TRANSPORT", 00:20:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.761 "adrfam": "ipv4", 00:20:23.761 "trsvcid": "$NVMF_PORT", 00:20:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.761 "hdgst": ${hdgst:-false}, 00:20:23.761 "ddgst": ${ddgst:-false} 00:20:23.761 }, 00:20:23.761 "method": "bdev_nvme_attach_controller" 00:20:23.761 } 00:20:23.761 EOF 00:20:23.761 )") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.761 "name": "Nvme$subsystem", 00:20:23.761 "trtype": "$TEST_TRANSPORT", 00:20:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.761 "adrfam": "ipv4", 00:20:23.761 "trsvcid": "$NVMF_PORT", 00:20:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.761 "hdgst": 
${hdgst:-false}, 00:20:23.761 "ddgst": ${ddgst:-false} 00:20:23.761 }, 00:20:23.761 "method": "bdev_nvme_attach_controller" 00:20:23.761 } 00:20:23.761 EOF 00:20:23.761 )") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.761 "name": "Nvme$subsystem", 00:20:23.761 "trtype": "$TEST_TRANSPORT", 00:20:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.761 "adrfam": "ipv4", 00:20:23.761 "trsvcid": "$NVMF_PORT", 00:20:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.761 "hdgst": ${hdgst:-false}, 00:20:23.761 "ddgst": ${ddgst:-false} 00:20:23.761 }, 00:20:23.761 "method": "bdev_nvme_attach_controller" 00:20:23.761 } 00:20:23.761 EOF 00:20:23.761 )") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.761 "name": "Nvme$subsystem", 00:20:23.761 "trtype": "$TEST_TRANSPORT", 00:20:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.761 "adrfam": "ipv4", 00:20:23.761 "trsvcid": "$NVMF_PORT", 00:20:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.761 "hdgst": ${hdgst:-false}, 00:20:23.761 "ddgst": ${ddgst:-false} 00:20:23.761 }, 00:20:23.761 "method": "bdev_nvme_attach_controller" 
00:20:23.761 } 00:20:23.761 EOF 00:20:23.761 )") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.761 "name": "Nvme$subsystem", 00:20:23.761 "trtype": "$TEST_TRANSPORT", 00:20:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.761 "adrfam": "ipv4", 00:20:23.761 "trsvcid": "$NVMF_PORT", 00:20:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.761 "hdgst": ${hdgst:-false}, 00:20:23.761 "ddgst": ${ddgst:-false} 00:20:23.761 }, 00:20:23.761 "method": "bdev_nvme_attach_controller" 00:20:23.761 } 00:20:23.761 EOF 00:20:23.761 )") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.761 "name": "Nvme$subsystem", 00:20:23.761 "trtype": "$TEST_TRANSPORT", 00:20:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.761 "adrfam": "ipv4", 00:20:23.761 "trsvcid": "$NVMF_PORT", 00:20:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.761 "hdgst": ${hdgst:-false}, 00:20:23.761 "ddgst": ${ddgst:-false} 00:20:23.761 }, 00:20:23.761 "method": "bdev_nvme_attach_controller" 00:20:23.761 } 00:20:23.761 EOF 00:20:23.761 )") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:23.761 [2024-12-06 03:28:43.808742] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:20:23.761 [2024-12-06 03:28:43.808791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669846 ] 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.761 "name": "Nvme$subsystem", 00:20:23.761 "trtype": "$TEST_TRANSPORT", 00:20:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.761 "adrfam": "ipv4", 00:20:23.761 "trsvcid": "$NVMF_PORT", 00:20:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.761 "hdgst": ${hdgst:-false}, 00:20:23.761 "ddgst": ${ddgst:-false} 00:20:23.761 }, 00:20:23.761 "method": "bdev_nvme_attach_controller" 00:20:23.761 } 00:20:23.761 EOF 00:20:23.761 )") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.761 "name": "Nvme$subsystem", 00:20:23.761 "trtype": "$TEST_TRANSPORT", 00:20:23.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.761 "adrfam": "ipv4", 00:20:23.761 "trsvcid": "$NVMF_PORT", 00:20:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.761 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:23.761 "hdgst": ${hdgst:-false}, 00:20:23.761 "ddgst": ${ddgst:-false} 00:20:23.761 }, 00:20:23.761 "method": "bdev_nvme_attach_controller" 00:20:23.761 } 00:20:23.761 EOF 00:20:23.761 )") 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.761 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.761 { 00:20:23.761 "params": { 00:20:23.762 "name": "Nvme$subsystem", 00:20:23.762 "trtype": "$TEST_TRANSPORT", 00:20:23.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "$NVMF_PORT", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.762 "hdgst": ${hdgst:-false}, 00:20:23.762 "ddgst": ${ddgst:-false} 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 } 00:20:23.762 EOF 00:20:23.762 )") 00:20:23.762 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.762 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:23.762 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:23.762 03:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:23.762 "params": { 00:20:23.762 "name": "Nvme1", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 },{ 00:20:23.762 "params": { 00:20:23.762 "name": "Nvme2", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 },{ 00:20:23.762 "params": { 00:20:23.762 "name": "Nvme3", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 },{ 00:20:23.762 "params": { 00:20:23.762 "name": "Nvme4", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 },{ 00:20:23.762 "params": { 
00:20:23.762 "name": "Nvme5", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 },{ 00:20:23.762 "params": { 00:20:23.762 "name": "Nvme6", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 },{ 00:20:23.762 "params": { 00:20:23.762 "name": "Nvme7", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 },{ 00:20:23.762 "params": { 00:20:23.762 "name": "Nvme8", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 },{ 00:20:23.762 "params": { 00:20:23.762 "name": "Nvme9", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 },{ 00:20:23.762 "params": { 00:20:23.762 "name": "Nvme10", 00:20:23.762 "trtype": "tcp", 00:20:23.762 "traddr": "10.0.0.2", 00:20:23.762 "adrfam": "ipv4", 00:20:23.762 "trsvcid": "4420", 00:20:23.762 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:23.762 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:23.762 "hdgst": false, 00:20:23.762 "ddgst": false 00:20:23.762 }, 00:20:23.762 "method": "bdev_nvme_attach_controller" 00:20:23.762 }' 00:20:23.762 [2024-12-06 03:28:43.873748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.021 [2024-12-06 03:28:43.916134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.400 Running I/O for 1 seconds... 00:20:26.594 2174.00 IOPS, 135.88 MiB/s 00:20:26.594 Latency(us) 00:20:26.594 [2024-12-06T02:28:46.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.594 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme1n1 : 1.15 277.72 17.36 0.00 0.00 226897.39 25872.47 208803.39 00:20:26.594 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme2n1 : 1.03 248.99 15.56 0.00 0.00 250554.32 15044.79 242540.19 00:20:26.594 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme3n1 : 1.16 275.98 17.25 0.00 0.00 223452.12 15842.62 221568.67 00:20:26.594 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme4n1 : 1.15 279.30 17.46 0.00 0.00 215665.53 12936.24 225215.89 00:20:26.594 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme5n1 : 1.17 273.73 17.11 0.00 0.00 217240.98 14930.81 213362.42 00:20:26.594 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme6n1 : 1.16 274.89 17.18 0.00 0.00 213789.65 15614.66 218833.25 00:20:26.594 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme7n1 : 1.16 279.71 17.48 0.00 0.00 207051.33 8605.16 199685.34 00:20:26.594 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme8n1 : 1.17 272.52 17.03 0.00 0.00 210560.67 14132.98 227951.30 00:20:26.594 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme9n1 : 1.18 271.00 16.94 0.00 0.00 208720.76 15842.62 240716.58 00:20:26.594 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.594 Verification LBA range: start 0x0 length 0x400 00:20:26.594 Nvme10n1 : 1.18 275.75 17.23 0.00 0.00 201895.75 2165.54 221568.67 00:20:26.594 [2024-12-06T02:28:46.735Z] =================================================================================================================== 00:20:26.594 [2024-12-06T02:28:46.735Z] Total : 2729.60 170.60 0.00 0.00 216870.43 2165.54 242540.19 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.853 rmmod nvme_tcp 00:20:26.853 rmmod nvme_fabrics 00:20:26.853 rmmod nvme_keyring 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2669083 ']' 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2669083 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2669083 ']' 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2669083 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669083 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669083' 00:20:26.853 killing process with pid 2669083 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2669083 00:20:26.853 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2669083 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:27.420 03:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.420 03:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:29.323 00:20:29.323 real 0m14.467s 00:20:29.323 user 0m34.140s 00:20:29.323 sys 0m5.183s 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.323 ************************************ 00:20:29.323 END TEST nvmf_shutdown_tc1 00:20:29.323 ************************************ 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:29.323 ************************************ 00:20:29.323 
START TEST nvmf_shutdown_tc2 00:20:29.323 ************************************ 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.323 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:29.324 03:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:29.324 03:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:29.324 03:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.324 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.324 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.324 03:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.324 03:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.324 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.324 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.324 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:29.324 03:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:29.325 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.325 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.583 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.583 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.583 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:29.583 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.583 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.583 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.583 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:29.583 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:29.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:29.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:20:29.583 00:20:29.583 --- 10.0.0.2 ping statistics --- 00:20:29.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.584 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:20:29.584 00:20:29.584 --- 10.0.0.1 ping statistics --- 00:20:29.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.584 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:29.584 03:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2670868 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2670868 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2670868 ']' 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.584 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.843 [2024-12-06 03:28:49.731217] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:20:29.843 [2024-12-06 03:28:49.731262] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.843 [2024-12-06 03:28:49.797101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.843 [2024-12-06 03:28:49.839738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.843 [2024-12-06 03:28:49.839776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.843 [2024-12-06 03:28:49.839783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.843 [2024-12-06 03:28:49.839789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.843 [2024-12-06 03:28:49.839795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:29.843 [2024-12-06 03:28:49.841316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.843 [2024-12-06 03:28:49.841400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.843 [2024-12-06 03:28:49.841479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.843 [2024-12-06 03:28:49.841479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:29.843 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.843 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:29.843 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:29.844 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.844 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.104 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.104 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.104 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.104 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.104 [2024-12-06 03:28:49.988270] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.104 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.104 03:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:30.104 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:30.104 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.104 03:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.104 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.104 Malloc1 00:20:30.104 [2024-12-06 03:28:50.098241] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.104 Malloc2 00:20:30.104 Malloc3 00:20:30.104 Malloc4 00:20:30.364 Malloc5 00:20:30.364 Malloc6 00:20:30.364 Malloc7 00:20:30.364 Malloc8 00:20:30.364 Malloc9 
00:20:30.364 Malloc10 00:20:30.364 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.364 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:30.364 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.364 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2671139 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2671139 /var/tmp/bdevperf.sock 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2671139 ']' 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:30.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.622 { 00:20:30.622 "params": { 00:20:30.622 "name": "Nvme$subsystem", 00:20:30.622 "trtype": "$TEST_TRANSPORT", 00:20:30.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.622 "adrfam": "ipv4", 00:20:30.622 "trsvcid": "$NVMF_PORT", 00:20:30.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.622 "hdgst": ${hdgst:-false}, 00:20:30.622 "ddgst": ${ddgst:-false} 00:20:30.622 }, 00:20:30.622 "method": "bdev_nvme_attach_controller" 00:20:30.622 } 00:20:30.622 EOF 00:20:30.622 )") 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.622 { 00:20:30.622 "params": { 00:20:30.622 "name": "Nvme$subsystem", 00:20:30.622 "trtype": "$TEST_TRANSPORT", 00:20:30.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.622 
"adrfam": "ipv4", 00:20:30.622 "trsvcid": "$NVMF_PORT", 00:20:30.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.622 "hdgst": ${hdgst:-false}, 00:20:30.622 "ddgst": ${ddgst:-false} 00:20:30.622 }, 00:20:30.622 "method": "bdev_nvme_attach_controller" 00:20:30.622 } 00:20:30.622 EOF 00:20:30.622 )") 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.622 { 00:20:30.622 "params": { 00:20:30.622 "name": "Nvme$subsystem", 00:20:30.622 "trtype": "$TEST_TRANSPORT", 00:20:30.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.622 "adrfam": "ipv4", 00:20:30.622 "trsvcid": "$NVMF_PORT", 00:20:30.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.622 "hdgst": ${hdgst:-false}, 00:20:30.622 "ddgst": ${ddgst:-false} 00:20:30.622 }, 00:20:30.622 "method": "bdev_nvme_attach_controller" 00:20:30.622 } 00:20:30.622 EOF 00:20:30.622 )") 00:20:30.622 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.623 { 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme$subsystem", 00:20:30.623 "trtype": "$TEST_TRANSPORT", 00:20:30.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "$NVMF_PORT", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.623 "hdgst": ${hdgst:-false}, 00:20:30.623 "ddgst": ${ddgst:-false} 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 } 00:20:30.623 EOF 00:20:30.623 )") 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.623 { 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme$subsystem", 00:20:30.623 "trtype": "$TEST_TRANSPORT", 00:20:30.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "$NVMF_PORT", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.623 "hdgst": ${hdgst:-false}, 00:20:30.623 "ddgst": ${ddgst:-false} 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 } 00:20:30.623 EOF 00:20:30.623 )") 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.623 { 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme$subsystem", 00:20:30.623 "trtype": "$TEST_TRANSPORT", 00:20:30.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "$NVMF_PORT", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.623 "hdgst": ${hdgst:-false}, 00:20:30.623 "ddgst": 
${ddgst:-false} 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 } 00:20:30.623 EOF 00:20:30.623 )") 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.623 { 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme$subsystem", 00:20:30.623 "trtype": "$TEST_TRANSPORT", 00:20:30.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "$NVMF_PORT", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.623 "hdgst": ${hdgst:-false}, 00:20:30.623 "ddgst": ${ddgst:-false} 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 } 00:20:30.623 EOF 00:20:30.623 )") 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.623 [2024-12-06 03:28:50.578744] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:20:30.623 [2024-12-06 03:28:50.578794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671139 ] 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.623 { 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme$subsystem", 00:20:30.623 "trtype": "$TEST_TRANSPORT", 00:20:30.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "$NVMF_PORT", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.623 "hdgst": ${hdgst:-false}, 00:20:30.623 "ddgst": ${ddgst:-false} 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 } 00:20:30.623 EOF 00:20:30.623 )") 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.623 { 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme$subsystem", 00:20:30.623 "trtype": "$TEST_TRANSPORT", 00:20:30.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "$NVMF_PORT", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.623 "hdgst": ${hdgst:-false}, 00:20:30.623 "ddgst": ${ddgst:-false} 00:20:30.623 }, 00:20:30.623 "method": 
"bdev_nvme_attach_controller" 00:20:30.623 } 00:20:30.623 EOF 00:20:30.623 )") 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:30.623 { 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme$subsystem", 00:20:30.623 "trtype": "$TEST_TRANSPORT", 00:20:30.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "$NVMF_PORT", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.623 "hdgst": ${hdgst:-false}, 00:20:30.623 "ddgst": ${ddgst:-false} 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 } 00:20:30.623 EOF 00:20:30.623 )") 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:30.623 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme1", 00:20:30.623 "trtype": "tcp", 00:20:30.623 "traddr": "10.0.0.2", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "4420", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.623 "hdgst": false, 00:20:30.623 "ddgst": false 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 },{ 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme2", 00:20:30.623 "trtype": "tcp", 00:20:30.623 "traddr": "10.0.0.2", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "4420", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:30.623 "hdgst": false, 00:20:30.623 "ddgst": false 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 },{ 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme3", 00:20:30.623 "trtype": "tcp", 00:20:30.623 "traddr": "10.0.0.2", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "4420", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:30.623 "hdgst": false, 00:20:30.623 "ddgst": false 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 },{ 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme4", 00:20:30.623 "trtype": "tcp", 00:20:30.623 "traddr": "10.0.0.2", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "4420", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:30.623 "hdgst": false, 00:20:30.623 "ddgst": false 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 },{ 00:20:30.623 "params": { 
00:20:30.623 "name": "Nvme5", 00:20:30.623 "trtype": "tcp", 00:20:30.623 "traddr": "10.0.0.2", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "4420", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:30.623 "hdgst": false, 00:20:30.623 "ddgst": false 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 },{ 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme6", 00:20:30.623 "trtype": "tcp", 00:20:30.623 "traddr": "10.0.0.2", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "4420", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:30.623 "hdgst": false, 00:20:30.623 "ddgst": false 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 },{ 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme7", 00:20:30.623 "trtype": "tcp", 00:20:30.623 "traddr": "10.0.0.2", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "4420", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:30.623 "hdgst": false, 00:20:30.623 "ddgst": false 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 },{ 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme8", 00:20:30.623 "trtype": "tcp", 00:20:30.623 "traddr": "10.0.0.2", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "4420", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:30.623 "hdgst": false, 00:20:30.623 "ddgst": false 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 },{ 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme9", 00:20:30.623 "trtype": "tcp", 00:20:30.623 "traddr": "10.0.0.2", 00:20:30.623 "adrfam": "ipv4", 00:20:30.623 "trsvcid": "4420", 00:20:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:30.623 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:30.623 "hdgst": false, 00:20:30.623 "ddgst": false 00:20:30.623 }, 00:20:30.623 "method": "bdev_nvme_attach_controller" 00:20:30.623 },{ 00:20:30.623 "params": { 00:20:30.623 "name": "Nvme10", 00:20:30.624 "trtype": "tcp", 00:20:30.624 "traddr": "10.0.0.2", 00:20:30.624 "adrfam": "ipv4", 00:20:30.624 "trsvcid": "4420", 00:20:30.624 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:30.624 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:30.624 "hdgst": false, 00:20:30.624 "ddgst": false 00:20:30.624 }, 00:20:30.624 "method": "bdev_nvme_attach_controller" 00:20:30.624 }' 00:20:30.624 [2024-12-06 03:28:50.643875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.624 [2024-12-06 03:28:50.685555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.996 Running I/O for 10 seconds... 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:32.563 03:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 2671139 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2671139 ']' 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2671139 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2671139 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2671139' 00:20:32.563 killing process with pid 2671139 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2671139 00:20:32.563 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2671139 00:20:32.563 Received shutdown signal, test time was about 0.677775 seconds 00:20:32.563 00:20:32.563 Latency(us) 00:20:32.563 [2024-12-06T02:28:52.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.563 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.563 Verification LBA range: start 0x0 length 0x400 00:20:32.563 Nvme1n1 : 0.66 292.26 18.27 0.00 0.00 215763.92 16982.37 208803.39 00:20:32.563 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:20:32.563 Verification LBA range: start 0x0 length 0x400 00:20:32.563 Nvme2n1 : 0.66 288.73 18.05 0.00 0.00 213098.26 21313.45 221568.67 00:20:32.563 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.563 Verification LBA range: start 0x0 length 0x400 00:20:32.563 Nvme3n1 : 0.66 290.05 18.13 0.00 0.00 205985.91 13734.07 222480.47 00:20:32.563 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.563 Verification LBA range: start 0x0 length 0x400 00:20:32.563 Nvme4n1 : 0.65 296.17 18.51 0.00 0.00 196166.34 12936.24 223392.28 00:20:32.563 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.563 Verification LBA range: start 0x0 length 0x400 00:20:32.564 Nvme5n1 : 0.67 284.69 17.79 0.00 0.00 200294.99 31229.33 205156.17 00:20:32.564 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.564 Verification LBA range: start 0x0 length 0x400 00:20:32.564 Nvme6n1 : 0.68 283.56 17.72 0.00 0.00 195911.98 15956.59 222480.47 00:20:32.564 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.564 Verification LBA range: start 0x0 length 0x400 00:20:32.564 Nvme7n1 : 0.67 286.05 17.88 0.00 0.00 188525.23 35332.45 188743.68 00:20:32.564 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.564 Verification LBA range: start 0x0 length 0x400 00:20:32.564 Nvme8n1 : 0.67 286.95 17.93 0.00 0.00 182448.90 30773.43 186920.07 00:20:32.564 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.564 Verification LBA range: start 0x0 length 0x400 00:20:32.564 Nvme9n1 : 0.64 198.95 12.43 0.00 0.00 253139.92 36700.16 219745.06 00:20:32.564 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.564 Verification LBA range: start 0x0 length 0x400 00:20:32.564 Nvme10n1 : 0.65 198.10 12.38 0.00 0.00 246202.10 23592.96 
237069.36 00:20:32.564 [2024-12-06T02:28:52.705Z] =================================================================================================================== 00:20:32.564 [2024-12-06T02:28:52.705Z] Total : 2705.51 169.09 0.00 0.00 206902.52 12936.24 237069.36 00:20:32.822 03:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2670868 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.756 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.756 rmmod nvme_tcp 00:20:33.756 rmmod nvme_fabrics 00:20:34.014 rmmod nvme_keyring 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2670868 ']' 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2670868 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2670868 ']' 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2670868 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2670868 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2670868' 00:20:34.014 killing process with pid 2670868 00:20:34.014 03:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2670868 00:20:34.014 03:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2670868 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.272 03:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.799 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.799 00:20:36.799 real 
0m6.994s 00:20:36.799 user 0m20.066s 00:20:36.799 sys 0m1.264s 00:20:36.799 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.799 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.799 ************************************ 00:20:36.799 END TEST nvmf_shutdown_tc2 00:20:36.799 ************************************ 00:20:36.799 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:36.799 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.799 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.799 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:36.799 ************************************ 00:20:36.799 START TEST nvmf_shutdown_tc3 00:20:36.800 ************************************ 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.800 
03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:36.800 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:36.800 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.800 03:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:36.800 Found net devices under 0000:86:00.0: cvl_0_0 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.800 
03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:36.800 Found net devices under 0000:86:00.1: cvl_0_1 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:36.800 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.801 03:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:36.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:20:36.801 00:20:36.801 --- 10.0.0.2 ping statistics --- 00:20:36.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.801 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:20:36.801 00:20:36.801 --- 10.0.0.1 ping statistics --- 00:20:36.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.801 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.801 
03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2672183 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2672183 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2672183 ']' 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.801 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.801 [2024-12-06 03:28:56.845232] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:20:36.801 [2024-12-06 03:28:56.845275] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.801 [2024-12-06 03:28:56.915037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.060 [2024-12-06 03:28:56.959370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.060 [2024-12-06 03:28:56.959404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.060 [2024-12-06 03:28:56.959411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.060 [2024-12-06 03:28:56.959417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.060 [2024-12-06 03:28:56.959422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.060 [2024-12-06 03:28:56.960844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.060 [2024-12-06 03:28:56.960920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.060 [2024-12-06 03:28:56.961050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.060 [2024-12-06 03:28:56.961051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.060 [2024-12-06 03:28:57.106482] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.060 03:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.060 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.060 Malloc1 00:20:37.319 [2024-12-06 03:28:57.216229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.319 Malloc2 00:20:37.319 Malloc3 00:20:37.319 Malloc4 00:20:37.319 Malloc5 00:20:37.319 Malloc6 00:20:37.319 Malloc7 00:20:37.578 Malloc8 00:20:37.578 Malloc9 
00:20:37.578 Malloc10 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2672456 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2672456 /var/tmp/bdevperf.sock 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2672456 ']' 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:37.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.578 { 00:20:37.578 "params": { 00:20:37.578 "name": "Nvme$subsystem", 00:20:37.578 "trtype": "$TEST_TRANSPORT", 00:20:37.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.578 "adrfam": "ipv4", 00:20:37.578 "trsvcid": "$NVMF_PORT", 00:20:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.578 "hdgst": ${hdgst:-false}, 00:20:37.578 "ddgst": ${ddgst:-false} 00:20:37.578 }, 00:20:37.578 "method": "bdev_nvme_attach_controller" 00:20:37.578 } 00:20:37.578 EOF 00:20:37.578 )") 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.578 { 00:20:37.578 "params": { 00:20:37.578 "name": "Nvme$subsystem", 00:20:37.578 "trtype": "$TEST_TRANSPORT", 00:20:37.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.578 
"adrfam": "ipv4", 00:20:37.578 "trsvcid": "$NVMF_PORT", 00:20:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.578 "hdgst": ${hdgst:-false}, 00:20:37.578 "ddgst": ${ddgst:-false} 00:20:37.578 }, 00:20:37.578 "method": "bdev_nvme_attach_controller" 00:20:37.578 } 00:20:37.578 EOF 00:20:37.578 )") 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.578 { 00:20:37.578 "params": { 00:20:37.578 "name": "Nvme$subsystem", 00:20:37.578 "trtype": "$TEST_TRANSPORT", 00:20:37.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.578 "adrfam": "ipv4", 00:20:37.578 "trsvcid": "$NVMF_PORT", 00:20:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.578 "hdgst": ${hdgst:-false}, 00:20:37.578 "ddgst": ${ddgst:-false} 00:20:37.578 }, 00:20:37.578 "method": "bdev_nvme_attach_controller" 00:20:37.578 } 00:20:37.578 EOF 00:20:37.578 )") 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.578 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.578 { 00:20:37.578 "params": { 00:20:37.578 "name": "Nvme$subsystem", 00:20:37.578 "trtype": "$TEST_TRANSPORT", 00:20:37.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.578 "adrfam": "ipv4", 00:20:37.578 "trsvcid": "$NVMF_PORT", 00:20:37.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:37.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.578 "hdgst": ${hdgst:-false}, 00:20:37.578 "ddgst": ${ddgst:-false} 00:20:37.578 }, 00:20:37.578 "method": "bdev_nvme_attach_controller" 00:20:37.579 } 00:20:37.579 EOF 00:20:37.579 )") 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.579 { 00:20:37.579 "params": { 00:20:37.579 "name": "Nvme$subsystem", 00:20:37.579 "trtype": "$TEST_TRANSPORT", 00:20:37.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.579 "adrfam": "ipv4", 00:20:37.579 "trsvcid": "$NVMF_PORT", 00:20:37.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.579 "hdgst": ${hdgst:-false}, 00:20:37.579 "ddgst": ${ddgst:-false} 00:20:37.579 }, 00:20:37.579 "method": "bdev_nvme_attach_controller" 00:20:37.579 } 00:20:37.579 EOF 00:20:37.579 )") 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.579 { 00:20:37.579 "params": { 00:20:37.579 "name": "Nvme$subsystem", 00:20:37.579 "trtype": "$TEST_TRANSPORT", 00:20:37.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.579 "adrfam": "ipv4", 00:20:37.579 "trsvcid": "$NVMF_PORT", 00:20:37.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.579 "hdgst": ${hdgst:-false}, 00:20:37.579 "ddgst": 
${ddgst:-false} 00:20:37.579 }, 00:20:37.579 "method": "bdev_nvme_attach_controller" 00:20:37.579 } 00:20:37.579 EOF 00:20:37.579 )") 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.579 [2024-12-06 03:28:57.695563] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:20:37.579 [2024-12-06 03:28:57.695610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672456 ] 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.579 { 00:20:37.579 "params": { 00:20:37.579 "name": "Nvme$subsystem", 00:20:37.579 "trtype": "$TEST_TRANSPORT", 00:20:37.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.579 "adrfam": "ipv4", 00:20:37.579 "trsvcid": "$NVMF_PORT", 00:20:37.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.579 "hdgst": ${hdgst:-false}, 00:20:37.579 "ddgst": ${ddgst:-false} 00:20:37.579 }, 00:20:37.579 "method": "bdev_nvme_attach_controller" 00:20:37.579 } 00:20:37.579 EOF 00:20:37.579 )") 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.579 { 00:20:37.579 "params": { 00:20:37.579 "name": "Nvme$subsystem", 00:20:37.579 "trtype": "$TEST_TRANSPORT", 00:20:37.579 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.579 "adrfam": "ipv4", 00:20:37.579 "trsvcid": "$NVMF_PORT", 00:20:37.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.579 "hdgst": ${hdgst:-false}, 00:20:37.579 "ddgst": ${ddgst:-false} 00:20:37.579 }, 00:20:37.579 "method": "bdev_nvme_attach_controller" 00:20:37.579 } 00:20:37.579 EOF 00:20:37.579 )") 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.579 { 00:20:37.579 "params": { 00:20:37.579 "name": "Nvme$subsystem", 00:20:37.579 "trtype": "$TEST_TRANSPORT", 00:20:37.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.579 "adrfam": "ipv4", 00:20:37.579 "trsvcid": "$NVMF_PORT", 00:20:37.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.579 "hdgst": ${hdgst:-false}, 00:20:37.579 "ddgst": ${ddgst:-false} 00:20:37.579 }, 00:20:37.579 "method": "bdev_nvme_attach_controller" 00:20:37.579 } 00:20:37.579 EOF 00:20:37.579 )") 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.579 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.579 { 00:20:37.579 "params": { 00:20:37.579 "name": "Nvme$subsystem", 00:20:37.579 "trtype": "$TEST_TRANSPORT", 00:20:37.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.579 "adrfam": "ipv4", 00:20:37.579 "trsvcid": "$NVMF_PORT", 00:20:37.579 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.579 "hdgst": ${hdgst:-false}, 00:20:37.579 "ddgst": ${ddgst:-false} 00:20:37.579 }, 00:20:37.579 "method": "bdev_nvme_attach_controller" 00:20:37.579 } 00:20:37.579 EOF 00:20:37.579 )") 00:20:37.837 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.837 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:20:37.837 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:37.837 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:37.837 "params": { 00:20:37.837 "name": "Nvme1", 00:20:37.837 "trtype": "tcp", 00:20:37.837 "traddr": "10.0.0.2", 00:20:37.837 "adrfam": "ipv4", 00:20:37.837 "trsvcid": "4420", 00:20:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.837 "hdgst": false, 00:20:37.837 "ddgst": false 00:20:37.837 }, 00:20:37.837 "method": "bdev_nvme_attach_controller" 00:20:37.837 },{ 00:20:37.837 "params": { 00:20:37.837 "name": "Nvme2", 00:20:37.837 "trtype": "tcp", 00:20:37.837 "traddr": "10.0.0.2", 00:20:37.837 "adrfam": "ipv4", 00:20:37.837 "trsvcid": "4420", 00:20:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.837 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:37.837 "hdgst": false, 00:20:37.837 "ddgst": false 00:20:37.837 }, 00:20:37.837 "method": "bdev_nvme_attach_controller" 00:20:37.837 },{ 00:20:37.837 "params": { 00:20:37.837 "name": "Nvme3", 00:20:37.837 "trtype": "tcp", 00:20:37.837 "traddr": "10.0.0.2", 00:20:37.837 "adrfam": "ipv4", 00:20:37.837 "trsvcid": "4420", 00:20:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:37.837 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:37.837 "hdgst": false, 00:20:37.837 "ddgst": false 00:20:37.837 }, 00:20:37.837 
"method": "bdev_nvme_attach_controller" 00:20:37.837 },{ 00:20:37.837 "params": { 00:20:37.837 "name": "Nvme4", 00:20:37.837 "trtype": "tcp", 00:20:37.837 "traddr": "10.0.0.2", 00:20:37.837 "adrfam": "ipv4", 00:20:37.837 "trsvcid": "4420", 00:20:37.837 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:37.837 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:37.837 "hdgst": false, 00:20:37.837 "ddgst": false 00:20:37.837 }, 00:20:37.837 "method": "bdev_nvme_attach_controller" 00:20:37.837 },{ 00:20:37.837 "params": { 00:20:37.837 "name": "Nvme5", 00:20:37.837 "trtype": "tcp", 00:20:37.837 "traddr": "10.0.0.2", 00:20:37.837 "adrfam": "ipv4", 00:20:37.837 "trsvcid": "4420", 00:20:37.838 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:37.838 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:37.838 "hdgst": false, 00:20:37.838 "ddgst": false 00:20:37.838 }, 00:20:37.838 "method": "bdev_nvme_attach_controller" 00:20:37.838 },{ 00:20:37.838 "params": { 00:20:37.838 "name": "Nvme6", 00:20:37.838 "trtype": "tcp", 00:20:37.838 "traddr": "10.0.0.2", 00:20:37.838 "adrfam": "ipv4", 00:20:37.838 "trsvcid": "4420", 00:20:37.838 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:37.838 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:37.838 "hdgst": false, 00:20:37.838 "ddgst": false 00:20:37.838 }, 00:20:37.838 "method": "bdev_nvme_attach_controller" 00:20:37.838 },{ 00:20:37.838 "params": { 00:20:37.838 "name": "Nvme7", 00:20:37.838 "trtype": "tcp", 00:20:37.838 "traddr": "10.0.0.2", 00:20:37.838 "adrfam": "ipv4", 00:20:37.838 "trsvcid": "4420", 00:20:37.838 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:37.838 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:37.838 "hdgst": false, 00:20:37.838 "ddgst": false 00:20:37.838 }, 00:20:37.838 "method": "bdev_nvme_attach_controller" 00:20:37.838 },{ 00:20:37.838 "params": { 00:20:37.838 "name": "Nvme8", 00:20:37.838 "trtype": "tcp", 00:20:37.838 "traddr": "10.0.0.2", 00:20:37.838 "adrfam": "ipv4", 00:20:37.838 "trsvcid": "4420", 00:20:37.838 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:20:37.838 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:37.838 "hdgst": false, 00:20:37.838 "ddgst": false 00:20:37.838 }, 00:20:37.838 "method": "bdev_nvme_attach_controller" 00:20:37.838 },{ 00:20:37.838 "params": { 00:20:37.838 "name": "Nvme9", 00:20:37.838 "trtype": "tcp", 00:20:37.838 "traddr": "10.0.0.2", 00:20:37.838 "adrfam": "ipv4", 00:20:37.838 "trsvcid": "4420", 00:20:37.838 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:37.838 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:37.838 "hdgst": false, 00:20:37.838 "ddgst": false 00:20:37.838 }, 00:20:37.838 "method": "bdev_nvme_attach_controller" 00:20:37.838 },{ 00:20:37.838 "params": { 00:20:37.838 "name": "Nvme10", 00:20:37.838 "trtype": "tcp", 00:20:37.838 "traddr": "10.0.0.2", 00:20:37.838 "adrfam": "ipv4", 00:20:37.838 "trsvcid": "4420", 00:20:37.838 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:37.838 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:37.838 "hdgst": false, 00:20:37.838 "ddgst": false 00:20:37.838 }, 00:20:37.838 "method": "bdev_nvme_attach_controller" 00:20:37.838 }' 00:20:37.838 [2024-12-06 03:28:57.760514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.838 [2024-12-06 03:28:57.801917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.214 Running I/O for 10 seconds... 
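The wall of heredoc output above is ten iterations of one pattern: for each subsystem index, `gen_nvmf_target_json` appends a `bdev_nvme_attach_controller` stanza built by an unquoted heredoc (so `$subsystem` expands), then joins the stanzas with commas for the `--json` payload handed to bdevperf. A minimal standalone sketch of that pattern; the function name `gen_bdevperf_config` and the hard-coded tcp/10.0.0.2/4420 values are illustrative, taken from the expanded output in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the config-generation pattern visible in the trace above:
# one heredoc-built JSON stanza per subsystem index, joined with commas.
gen_bdevperf_config() {
  local config=()
  local subsystem
  for subsystem in "$@"; do
    # Unquoted EOF so $subsystem expands inside the stanza,
    # mirroring the expanded heredoc in the log.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # IFS=, makes "${config[*]}" comma-join the stanzas.
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_bdevperf_config 1 2 3
```

The real helper in the trace additionally runs the result through `jq .` before feeding it to bdevperf; this sketch only emits the comma-joined stanzas.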
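The `waitforio` calls that follow in the trace poll `bdev_get_iostat` over the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads (67 on the first poll, 131 on the second), retrying up to 10 times with a 0.25 s sleep. A self-contained sketch of that loop under stated assumptions: `get_read_ops` here is a stub standing in for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops'`, with a counter file simulating I/O accumulating between polls.

```shell
#!/usr/bin/env bash
# Stub for the real RPC query used in the trace:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b "$1" \
#     | jq -r '.bdevs[0].num_read_ops'
# A counter file stands in for I/O accumulating between polls.
statefile=$(mktemp)
echo 0 > "$statefile"
get_read_ops() {
  local cur
  cur=$(( $(cat "$statefile") + 67 ))
  echo "$cur" > "$statefile"
  echo "$cur"
}

# Poll until the bdev has serviced at least 100 reads,
# at most 10 tries, 0.25 s apart — the shutdown.sh@58-70 logic.
waitforio() {
  local bdev=$1 ret=1 i count
  for ((i = 10; i != 0; i--)); do
    count=$(get_read_ops "$bdev")
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio Nvme1n1 && echo "bdev saw enough I/O"
```

Only once the loop returns 0 does the test proceed to kill the target (`killprocess` in the trace), guaranteeing the shutdown happens while I/O is actually in flight.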
00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:39.474 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:39.733 03:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:39.733 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:39.733 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.733 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.733 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.733 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:39.733 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:39.733 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2672183 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2672183 ']' 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2672183 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.015 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2672183 00:20:40.015 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:40.015 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:40.015 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2672183' 00:20:40.015 killing process with pid 2672183 00:20:40.015 03:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2672183 00:20:40.015 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2672183 00:20:40.015 [2024-12-06 03:29:00.006326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299ac0 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250de30 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250de30 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250de30 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250de30 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250de30 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250de30 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250de30 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250de30 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250de30 is same with the state(6) to be set 00:20:40.015 [2024-12-06 03:29:00.007388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x250de30 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008878]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008967] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.008993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009047] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.016 [2024-12-06 03:29:00.009123] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.009187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299fb0 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010606] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010689] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010774] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010853] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010931] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.010992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.011000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.011007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a480 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012114] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012217] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.017 [2024-12-06 03:29:00.012224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012294] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012371] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012450] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012525] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.012538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a970 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013381] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013461] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.018 [2024-12-06 03:29:00.013542] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013621] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:40.019 [2024-12-06 03:29:00.013654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.019 [2024-12-06 03:29:00.013679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:40.019 [2024-12-06 03:29:00.013687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.019 [2024-12-06 03:29:00.013712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:40.019 [2024-12-06 03:29:00.013720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.019 [2024-12-06 03:29:00.013728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:40.019 [2024-12-06 03:29:00.013736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.019 [2024-12-06 03:29:00.013745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 03:29:00.013754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set
00:20:40.019 [2024-12-06 
03:29:00.013754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5a60 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae40 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.013814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.013822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.013830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.013837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.013845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.013852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.013859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.013866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e940 is 
same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.013901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.013909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.013920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.013927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.013934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.013942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.013955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.013963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x267ac80 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.013996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.014005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.014013] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.014020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.014028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.014035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.014043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.014051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.014057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242e10 is same with the state(6) to be set 00:20:40.019 [2024-12-06 03:29:00.014081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.014090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.014098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.014105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.019 [2024-12-06 03:29:00.014113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:40.019 [2024-12-06 03:29:00.014120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.020 [2024-12-06 03:29:00.014135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224edd0 is same with the state(6) to be set 00:20:40.020 [2024-12-06 03:29:00.014176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 
[2024-12-06 03:29:00.014531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.020 [2024-12-06 03:29:00.014796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.020 [2024-12-06 03:29:00.014805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.014992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.014999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.014999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.015010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.015010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.015021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.015025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.021 [2024-12-06 03:29:00.015028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.015034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.021 [2024-12-06 03:29:00.015036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set
00:20:40.021 [2024-12-06 03:29:00.015043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x229b330 is same with [2024-12-06 03:29:00.015044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:1the state(6) to be set 00:20:40.021 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.021 [2024-12-06 03:29:00.015052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.021 [2024-12-06 03:29:00.015060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.021 [2024-12-06 03:29:00.015068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.021 [2024-12-06 03:29:00.015080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.021 [2024-12-06 03:29:00.015088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.021 [2024-12-06 
03:29:00.015096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.021 [2024-12-06 03:29:00.015105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.021 [2024-12-06 03:29:00.015113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.021 [2024-12-06 03:29:00.015121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.021 [2024-12-06 03:29:00.015137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with [2024-12-06 03:29:00.015138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1the state(6) to be set 00:20:40.021 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.021 [2024-12-06 03:29:00.015146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:40.021 [2024-12-06 03:29:00.015154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.021 [2024-12-06 03:29:00.015162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.021 [2024-12-06 03:29:00.015170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:1[2024-12-06 03:29:00.015177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.021 the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 03:29:00.015188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.021 the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.021 [2024-12-06 03:29:00.015200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.022 [2024-12-06 03:29:00.015206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.022 [2024-12-06 03:29:00.015216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.022 [2024-12-06 03:29:00.015224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.022 [2024-12-06 03:29:00.015231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:1[2024-12-06 03:29:00.015239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.022 the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 03:29:00.015249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.022 the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.022 [2024-12-06 03:29:00.015268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.022 [2024-12-06 03:29:00.015276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.022 [2024-12-06 03:29:00.015284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.022 [2024-12-06 03:29:00.015292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:1[2024-12-06 03:29:00.015299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.022 the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 03:29:00.015309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.022 the state(6) to be set 
00:20:40.022 [2024-12-06 03:29:00.015323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:1[2024-12-06 03:29:00.015324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.022 the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 03:29:00.015334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.022 the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.015384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b330 is same with the state(6) to be set 00:20:40.022 [2024-12-06 03:29:00.017179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set 
00:20:40.022 [2024-12-06 03:29:00.017203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017370]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:40.022 [2024-12-06 03:29:00.017422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224edd0 (9): Bad file descriptor
00:20:40.022 [2024-12-06 03:29:00.017452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.022 [2024-12-06 03:29:00.017487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.022 [2024-12-06 03:29:00.017505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.022 [2024-12-06 03:29:00.017513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b800 is same with the state(6) to be set
00:20:40.023 [2024-12-06 03:29:00.017684] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.017987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.017999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.018008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.018016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.018026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.023 [2024-12-06 03:29:00.018034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.023 [2024-12-06 03:29:00.018043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.024 [2024-12-06 03:29:00.018050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.024 [2024-12-06 03:29:00.018059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.024 [2024-12-06 03:29:00.018066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018266] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018351] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.024 [2024-12-06 03:29:00.018397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.024 [2024-12-06 03:29:00.018516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229bcf0 is same with the state(6) to be set 00:20:40.024 [2024-12-06 03:29:00.018541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229bcf0 is same with the state(6) to be set 00:20:40.024 [2024-12-06 03:29:00.018549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229bcf0 is same with the state(6) to be set 00:20:40.024 [2024-12-06 03:29:00.018556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229bcf0 is same with the state(6) to be set 00:20:40.024 [2024-12-06 03:29:00.018563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229bcf0 is same with the state(6) to be set 00:20:40.024 [2024-12-06 
03:29:00.018571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229bcf0 is same with the state(6) to be set 00:20:40.024 [... same *ERROR* repeated for tqpair=0x229bcf0 through 03:29:00.021022, then for tqpair=0x250d960 from 03:29:00.021666 through 03:29:00.024354 ...] 00:20:40.026 [2024-12-06 03:29:00.024387]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250d960 is same with the state(6) to be set 00:20:40.026 [2024-12-06 03:29:00.036861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.036899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.036915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.036927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.036941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.036961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.036974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.036984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.036997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.037014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.037037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.037059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.037081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.037104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.037129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.037151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:40.026 [2024-12-06 03:29:00.037162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.026 [2024-12-06 03:29:00.037173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.037864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.037889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.037910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.037930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.037940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bf830 is same with the state(6) to be set 00:20:40.026 [2024-12-06 03:29:00.037995] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5550 is same with the state(6) to be set 00:20:40.026 [2024-12-06 03:29:00.038113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a5a60 (9): Bad file descriptor 00:20:40.026 [2024-12-06 03:29:00.038153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26798a0 is same with the state(6) to be set 00:20:40.026 [2024-12-06 03:29:00.038271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038318] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26796a0 is same with the state(6) to be set 00:20:40.026 [2024-12-06 03:29:00.038386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e940 (9): Bad file descriptor 00:20:40.026 [2024-12-06 03:29:00.038410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x267ac80 (9): Bad file descriptor 00:20:40.026 [2024-12-06 03:29:00.038445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.026 [2024-12-06 03:29:00.038458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.026 [2024-12-06 03:29:00.038469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.027 [2024-12-06 03:29:00.038479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.038490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:40.027 [2024-12-06 03:29:00.038500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.038511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.027 [2024-12-06 03:29:00.038521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.038531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163610 is same with the state(6) to be set 00:20:40.027 [2024-12-06 03:29:00.038552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242e10 (9): Bad file descriptor 00:20:40.027 [2024-12-06 03:29:00.040255] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:40.027 [2024-12-06 03:29:00.040332] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:40.027 [2024-12-06 03:29:00.040680] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:40.027 [2024-12-06 03:29:00.040708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:40.027 [2024-12-06 03:29:00.041018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.027 [2024-12-06 03:29:00.041038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224edd0 with addr=10.0.0.2, port=4420 00:20:40.027 [2024-12-06 03:29:00.041050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224edd0 is same with the state(6) to be set 00:20:40.027 [2024-12-06 03:29:00.041111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 
03:29:00.041125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041261] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 
[2024-12-06 03:29:00.041515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.027 [2024-12-06 03:29:00.041851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.027 [2024-12-06 03:29:00.041862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.041873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.041885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.041895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.041907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.041918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.041930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.041939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.041956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.041967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.041981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.041991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042025] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042150] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 
03:29:00.042409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.042556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.042566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2640320 is same with the state(6) to be set 00:20:40.028 [2024-12-06 03:29:00.042722] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:40.028 [2024-12-06 03:29:00.043073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.028 [2024-12-06 03:29:00.043097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2242e10 with addr=10.0.0.2, port=4420 00:20:40.028 [2024-12-06 03:29:00.043109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242e10 is same with the state(6) to be set 00:20:40.028 [2024-12-06 03:29:00.043126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224edd0 (9): Bad file descriptor 00:20:40.028 [2024-12-06 03:29:00.044852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.044873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.044892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.028 [2024-12-06 03:29:00.044903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.028 [2024-12-06 03:29:00.044915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.044925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.044938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.044956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.044969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.044978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.044991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045180] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045576] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045704] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.029 [2024-12-06 03:29:00.045830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.029 [2024-12-06 03:29:00.045840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.045853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.045864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.045876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.045886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.045899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.045909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.045921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.045930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.045942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.045958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 
03:29:00.045970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.045980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.045992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.046337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.046348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:40.030 [2024-12-06 03:29:00.046360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264f910 is same with the state(6) to be set 00:20:40.030 [2024-12-06 03:29:00.046609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:40.030 [2024-12-06 03:29:00.046656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242e10 (9): Bad file descriptor 00:20:40.030 [2024-12-06 03:29:00.046672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:40.030 [2024-12-06 03:29:00.046682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:40.030 [2024-12-06 03:29:00.046693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:40.030 [2024-12-06 03:29:00.046705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:40.030 [2024-12-06 03:29:00.048128] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:40.030 [2024-12-06 03:29:00.048190] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:40.030 [2024-12-06 03:29:00.048242] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:40.030 [2024-12-06 03:29:00.048266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:40.030 [2024-12-06 03:29:00.048286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26796a0 (9): Bad file descriptor 00:20:40.030 [2024-12-06 03:29:00.048550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.030 [2024-12-06 03:29:00.048568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e940 with addr=10.0.0.2, port=4420 00:20:40.030 [2024-12-06 03:29:00.048579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e940 is same with the state(6) to be set 00:20:40.030 [2024-12-06 03:29:00.048591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:40.030 [2024-12-06 03:29:00.048601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:40.030 [2024-12-06 03:29:00.048612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:40.030 [2024-12-06 03:29:00.048622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:20:40.030 [2024-12-06 03:29:00.048653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26bf830 (9): Bad file descriptor 00:20:40.030 [2024-12-06 03:29:00.048682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a5550 (9): Bad file descriptor 00:20:40.030 [2024-12-06 03:29:00.048714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26798a0 (9): Bad file descriptor 00:20:40.030 [2024-12-06 03:29:00.048745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2163610 (9): Bad file descriptor 00:20:40.030 [2024-12-06 03:29:00.049243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e940 (9): Bad file descriptor 00:20:40.030 [2024-12-06 03:29:00.049326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.049341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.049361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.049373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.049384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.049394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.049407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.049418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.049431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.049441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.049452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.049463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.030 [2024-12-06 03:29:00.049476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.030 [2024-12-06 03:29:00.049485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:40.031 [2024-12-06 03:29:00.049542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049669] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.049985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.049995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 
03:29:00.050061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050186] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.031 [2024-12-06 03:29:00.050391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.031 [2024-12-06 03:29:00.050400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 
[2024-12-06 03:29:00.050445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.050784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.050795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264d3b0 is same with the state(6) to be set 00:20:40.032 [2024-12-06 03:29:00.052543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:40.032 [2024-12-06 03:29:00.052579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:40.032 [2024-12-06 03:29:00.052968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.052978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.052990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.053001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.053014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.053023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.032 [2024-12-06 03:29:00.053036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.032 [2024-12-06 03:29:00.053047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053091] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 
03:29:00.053479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053604] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.033 [2024-12-06 03:29:00.053830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.033 [2024-12-06 03:29:00.053842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.053854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 
[2024-12-06 03:29:00.053864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.053875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.053885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.053897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.053906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.053918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.053928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.053940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.053955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.053970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.053979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.053991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.054001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.054011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26544a0 is same with the state(6) to be set 00:20:40.034 [2024-12-06 03:29:00.055117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:40.034 [2024-12-06 03:29:00.055138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:40.034 [2024-12-06 03:29:00.055149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:40.034 [2024-12-06 03:29:00.055356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.034 [2024-12-06 03:29:00.055371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26796a0 with addr=10.0.0.2, port=4420 00:20:40.034 [2024-12-06 03:29:00.055380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26796a0 is same with the state(6) to be set 00:20:40.034 [2024-12-06 03:29:00.055388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:40.034 [2024-12-06 03:29:00.055395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:40.034 [2024-12-06 03:29:00.055404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:40.034 [2024-12-06 03:29:00.055411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:20:40.034 [2024-12-06 03:29:00.055757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.034 [2024-12-06 03:29:00.055772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224edd0 with addr=10.0.0.2, port=4420 00:20:40.034 [2024-12-06 03:29:00.055781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224edd0 is same with the state(6) to be set 00:20:40.034 [2024-12-06 03:29:00.055995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.034 [2024-12-06 03:29:00.056007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x267ac80 with addr=10.0.0.2, port=4420 00:20:40.034 [2024-12-06 03:29:00.056014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x267ac80 is same with the state(6) to be set 00:20:40.034 [2024-12-06 03:29:00.056116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.034 [2024-12-06 03:29:00.056127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a5a60 with addr=10.0.0.2, port=4420 00:20:40.034 [2024-12-06 03:29:00.056134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5a60 is same with the state(6) to be set 00:20:40.034 [2024-12-06 03:29:00.056144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26796a0 (9): Bad file descriptor 00:20:40.034 [2024-12-06 03:29:00.056627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:40.034 [2024-12-06 03:29:00.056649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224edd0 (9): Bad file descriptor 00:20:40.034 [2024-12-06 03:29:00.056658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x267ac80 (9): Bad file descriptor 00:20:40.034 
[2024-12-06 03:29:00.056670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a5a60 (9): Bad file descriptor 00:20:40.034 [2024-12-06 03:29:00.056678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:40.034 [2024-12-06 03:29:00.056685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:40.034 [2024-12-06 03:29:00.056692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:40.034 [2024-12-06 03:29:00.056700] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:40.034 [2024-12-06 03:29:00.056907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.034 [2024-12-06 03:29:00.056920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2242e10 with addr=10.0.0.2, port=4420 00:20:40.034 [2024-12-06 03:29:00.056928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242e10 is same with the state(6) to be set 00:20:40.034 [2024-12-06 03:29:00.056935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:40.034 [2024-12-06 03:29:00.056941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:40.034 [2024-12-06 03:29:00.056956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:40.034 [2024-12-06 03:29:00.056963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:40.034 [2024-12-06 03:29:00.056971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:40.034 [2024-12-06 03:29:00.056977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:40.034 [2024-12-06 03:29:00.056984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:40.034 [2024-12-06 03:29:00.056990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:40.034 [2024-12-06 03:29:00.056997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:40.034 [2024-12-06 03:29:00.057003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:40.034 [2024-12-06 03:29:00.057010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:40.034 [2024-12-06 03:29:00.057016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:20:40.034 [2024-12-06 03:29:00.057065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242e10 (9): Bad file descriptor 00:20:40.034 [2024-12-06 03:29:00.057092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:40.034 [2024-12-06 03:29:00.057106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:40.034 [2024-12-06 03:29:00.057113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:40.034 [2024-12-06 03:29:00.057120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:40.034 [2024-12-06 03:29:00.057126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:40.034 [2024-12-06 03:29:00.057295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.034 [2024-12-06 03:29:00.057308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e940 with addr=10.0.0.2, port=4420 00:20:40.034 [2024-12-06 03:29:00.057315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e940 is same with the state(6) to be set 00:20:40.034 [2024-12-06 03:29:00.057345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e940 (9): Bad file descriptor 00:20:40.034 [2024-12-06 03:29:00.057370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:40.034 [2024-12-06 03:29:00.057376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:40.034 [2024-12-06 03:29:00.057383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:20:40.034 [2024-12-06 03:29:00.057390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:40.034 [2024-12-06 03:29:00.058383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.058397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.058408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.058416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.058425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.058433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.058442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.058449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.058457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.058464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.058472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.034 [2024-12-06 03:29:00.058479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.034 [2024-12-06 03:29:00.058487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058732] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058822] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.058987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.058996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.059004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 
03:29:00.059014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.059022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.059032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.059040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.059050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.059058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.059067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.059076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.059085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.059093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.059104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.059113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.059124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.035 [2024-12-06 03:29:00.059132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.035 [2024-12-06 03:29:00.059140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:40.036 [2024-12-06 03:29:00.059295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059381] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.059411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.059419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264e650 is same with the state(6) to be set 00:20:40.036 [2024-12-06 03:29:00.060433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:40.036 [2024-12-06 03:29:00.060583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.036 [2024-12-06 03:29:00.060791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.036 [2024-12-06 03:29:00.060798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060956] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.060988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.060998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061046] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 
03:29:00.061235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.037 [2024-12-06 03:29:00.061362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.037 [2024-12-06 03:29:00.061369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.061377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.061384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.061393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.061399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.061409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.061415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.061424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.061432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.061441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.061448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.061457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.061463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.061472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.061479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.061487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2650c20 is same with the state(6) to be set 00:20:40.038 [2024-12-06 03:29:00.062490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:40.038 [2024-12-06 03:29:00.062503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.038 [2024-12-06 03:29:00.062670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.038 [2024-12-06 03:29:00.062677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:40.038 [2024-12-06 03:29:00.062686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.038 [2024-12-06 03:29:00.062693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs repeated for cid:13 through cid:63, lba advancing by 128 per command (18048 through 24448), timestamps 03:29:00.062701 through 03:29:00.063499 ...]
00:20:40.039 [2024-12-06 03:29:00.063507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2651f30 is same with the state(6) to be set
00:20:40.039 [2024-12-06 03:29:00.064525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.039 [2024-12-06 03:29:00.064539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs repeated for cid:1 through cid:63, lba advancing by 128 per command (16512 through 24448), timestamps 03:29:00.064550 through 03:29:00.065562 ...]
00:20:40.041 [2024-12-06 03:29:00.065570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26531f0 is same with the state(6) to be set
00:20:40.041 [2024-12-06 03:29:00.066552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:40.041 [2024-12-06 03:29:00.066570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:40.041 [2024-12-06 03:29:00.066581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:40.041 task offset: 24576 on job bdev=Nvme1n1 fails
00:20:40.041 
00:20:40.041 Latency(us)
00:20:40.041 [2024-12-06T02:29:00.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.041 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme1n1 ended in about 0.80 seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme1n1 : 0.80 238.69 14.92 79.56 0.00 198776.18 3091.59 222480.47 00:20:40.041 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme2n1 ended in about 0.83 seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme2n1 : 0.83 165.58 10.35 77.35 0.00 255455.16 28151.99 225215.89 00:20:40.041 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme3n1 ended in about 0.83 seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme3n1 : 0.83 230.78 14.42 76.93 0.00 197612.97 14360.93 222480.47 00:20:40.041 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme4n1 ended in about 0.84 seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme4n1 : 0.84 233.54 14.60 76.26 0.00 192473.64 13449.13 210627.01 00:20:40.041 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme5n1 ended in about 0.85 seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme5n1 : 0.85 150.99 9.44 75.49 0.00 258278.03 16640.45 223392.28 00:20:40.041 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme6n1 ended in about 0.84 seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme6n1 : 0.84 229.88 14.37 76.63 0.00 186541.86 15842.62 218833.25 00:20:40.041 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme7n1 ended in about 0.85 
seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme7n1 : 0.85 225.93 14.12 75.31 0.00 186237.77 14702.86 237069.36 00:20:40.041 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme8n1 ended in about 0.85 seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme8n1 : 0.85 150.26 9.39 75.13 0.00 243851.20 14132.98 248011.02 00:20:40.041 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme9n1 ended in about 0.85 seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme9n1 : 0.85 149.90 9.37 74.95 0.00 239157.72 19147.91 223392.28 00:20:40.041 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.041 Job: Nvme10n1 ended in about 0.84 seconds with error 00:20:40.041 Verification LBA range: start 0x0 length 0x400 00:20:40.041 Nvme10n1 : 0.84 156.68 9.79 75.97 0.00 225438.90 18464.06 248011.02 00:20:40.041 [2024-12-06T02:29:00.182Z] =================================================================================================================== 00:20:40.041 [2024-12-06T02:29:00.182Z] Total : 1932.24 120.76 763.58 0.00 214802.82 3091.59 248011.02 00:20:40.041 [2024-12-06 03:29:00.097810] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:40.041 [2024-12-06 03:29:00.097861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:40.041 [2024-12-06 03:29:00.098354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.041 [2024-12-06 03:29:00.098377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26798a0 with addr=10.0.0.2, port=4420 00:20:40.041 [2024-12-06 03:29:00.098389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26798a0 is same with the state(6) to be 
set 00:20:40.041 [2024-12-06 03:29:00.098501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.041 [2024-12-06 03:29:00.098514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2163610 with addr=10.0.0.2, port=4420 00:20:40.041 [2024-12-06 03:29:00.098522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163610 is same with the state(6) to be set 00:20:40.041 [2024-12-06 03:29:00.098675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.041 [2024-12-06 03:29:00.098687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26bf830 with addr=10.0.0.2, port=4420 00:20:40.041 [2024-12-06 03:29:00.098695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bf830 is same with the state(6) to be set 00:20:40.041 [2024-12-06 03:29:00.098759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.041 [2024-12-06 03:29:00.098769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a5550 with addr=10.0.0.2, port=4420 00:20:40.041 [2024-12-06 03:29:00.098777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5550 is same with the state(6) to be set 00:20:40.041 [2024-12-06 03:29:00.098813] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:20:40.041 [2024-12-06 03:29:00.098825] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:40.041 [2024-12-06 03:29:00.098837] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:20:40.041 [2024-12-06 03:29:00.098847] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:20:40.041 [2024-12-06 03:29:00.099774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:40.042 [2024-12-06 03:29:00.099790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:40.042 [2024-12-06 03:29:00.099798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:40.042 [2024-12-06 03:29:00.099806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:40.042 [2024-12-06 03:29:00.099861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26798a0 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.099875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2163610 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.099884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26bf830 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.099892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a5550 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.100213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:40.042 [2024-12-06 03:29:00.100229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:40.042 [2024-12-06 03:29:00.100412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.042 [2024-12-06 03:29:00.100428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26796a0 with addr=10.0.0.2, port=4420 00:20:40.042 
[2024-12-06 03:29:00.100436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26796a0 is same with the state(6) to be set 00:20:40.042 [2024-12-06 03:29:00.100576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.042 [2024-12-06 03:29:00.100587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a5a60 with addr=10.0.0.2, port=4420 00:20:40.042 [2024-12-06 03:29:00.100595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5a60 is same with the state(6) to be set 00:20:40.042 [2024-12-06 03:29:00.100755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.042 [2024-12-06 03:29:00.100766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x267ac80 with addr=10.0.0.2, port=4420 00:20:40.042 [2024-12-06 03:29:00.100773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x267ac80 is same with the state(6) to be set 00:20:40.042 [2024-12-06 03:29:00.100914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.042 [2024-12-06 03:29:00.100924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224edd0 with addr=10.0.0.2, port=4420 00:20:40.042 [2024-12-06 03:29:00.100935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224edd0 is same with the state(6) to be set 00:20:40.042 [2024-12-06 03:29:00.100943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.100956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.100965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:20:40.042 [2024-12-06 03:29:00.100974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:40.042 [2024-12-06 03:29:00.100983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.100991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.100998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:40.042 [2024-12-06 03:29:00.101004] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:40.042 [2024-12-06 03:29:00.101011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.101016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.101023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:40.042 [2024-12-06 03:29:00.101029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:40.042 [2024-12-06 03:29:00.101035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.101043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.101051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:20:40.042 [2024-12-06 03:29:00.101057] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:20:40.042 [2024-12-06 03:29:00.101287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.042 [2024-12-06 03:29:00.101298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2242e10 with addr=10.0.0.2, port=4420 00:20:40.042 [2024-12-06 03:29:00.101306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242e10 is same with the state(6) to be set 00:20:40.042 [2024-12-06 03:29:00.101465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.042 [2024-12-06 03:29:00.101475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e940 with addr=10.0.0.2, port=4420 00:20:40.042 [2024-12-06 03:29:00.101482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e940 is same with the state(6) to be set 00:20:40.042 [2024-12-06 03:29:00.101491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26796a0 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.101501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a5a60 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.101510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x267ac80 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.101518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224edd0 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.101548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242e10 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.101557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x224e940 (9): Bad file descriptor 00:20:40.042 [2024-12-06 03:29:00.101568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.101574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.101581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:40.042 [2024-12-06 03:29:00.101588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:40.042 [2024-12-06 03:29:00.101595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.101601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.101608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:40.042 [2024-12-06 03:29:00.101614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:40.042 [2024-12-06 03:29:00.101621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.101627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.101633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:40.042 [2024-12-06 03:29:00.101639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:20:40.042 [2024-12-06 03:29:00.101646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.101651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.101658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:40.042 [2024-12-06 03:29:00.101664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:40.042 [2024-12-06 03:29:00.101686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.101693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.101700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:40.042 [2024-12-06 03:29:00.101706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:40.042 [2024-12-06 03:29:00.101712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:40.042 [2024-12-06 03:29:00.101718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:40.042 [2024-12-06 03:29:00.101725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:40.042 [2024-12-06 03:29:00.101731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:20:40.303 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2672456 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2672456 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2672456 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.683 rmmod nvme_tcp 00:20:41.683 rmmod nvme_fabrics 00:20:41.683 rmmod nvme_keyring 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:41.683 03:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2672183 ']' 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2672183 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2672183 ']' 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2672183 00:20:41.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2672183) - No such process 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2672183 is not found' 00:20:41.683 Process with pid 2672183 is not found 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:41.683 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.684 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:20:41.684 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.684 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.684 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:43.591 00:20:43.591 real 0m7.097s 00:20:43.591 user 0m16.192s 00:20:43.591 sys 0m1.269s 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.591 ************************************ 00:20:43.591 END TEST nvmf_shutdown_tc3 00:20:43.591 ************************************ 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:43.591 ************************************ 00:20:43.591 START TEST nvmf_shutdown_tc4 00:20:43.591 ************************************ 00:20:43.591 03:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.591 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:43.592 03:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.592 03:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:43.592 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:43.592 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.592 03:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:20:43.592 Found net devices under 0000:86:00.0: cvl_0_0 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:43.592 Found net devices under 0000:86:00.1: cvl_0_1 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.592 03:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.592 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:43.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:20:43.852 00:20:43.852 --- 10.0.0.2 ping statistics --- 00:20:43.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.852 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:20:43.852 00:20:43.852 --- 10.0.0.1 ping statistics --- 00:20:43.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.852 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.852 03:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2673500 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2673500 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2673500 ']' 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.852 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:44.110 [2024-12-06 03:29:04.029200] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:20:44.110 [2024-12-06 03:29:04.029244] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.110 [2024-12-06 03:29:04.099973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.110 [2024-12-06 03:29:04.145473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.110 [2024-12-06 03:29:04.145504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.110 [2024-12-06 03:29:04.145512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.110 [2024-12-06 03:29:04.145519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.110 [2024-12-06 03:29:04.145525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:44.111 [2024-12-06 03:29:04.147017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.111 [2024-12-06 03:29:04.147037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.111 [2024-12-06 03:29:04.147128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.111 [2024-12-06 03:29:04.147128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:44.369 [2024-12-06 03:29:04.297756] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.369 03:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.369 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:44.369 Malloc1 00:20:44.369 [2024-12-06 03:29:04.408049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.369 Malloc2 00:20:44.369 Malloc3 00:20:44.627 Malloc4 00:20:44.627 Malloc5 00:20:44.627 Malloc6 00:20:44.627 Malloc7 00:20:44.627 Malloc8 00:20:44.627 Malloc9 
00:20:44.886 Malloc10 00:20:44.886 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.886 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:44.886 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.886 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:44.886 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2673770 00:20:44.886 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:44.886 03:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:44.886 [2024-12-06 03:29:04.911810] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2673500 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2673500 ']' 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2673500 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2673500 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2673500' 00:20:50.172 killing process with pid 2673500 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2673500 00:20:50.172 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2673500 00:20:50.172 [2024-12-06 03:29:09.912387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79b390 is same with the state(6) to be set 00:20:50.172 [2024-12-06 
03:29:09.912449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79b390 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.912457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79b390 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.912464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79b390 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.912471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79b390 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.915338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f27f0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.915366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f27f0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.915373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f27f0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.915380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f27f0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.915388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f27f0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.921505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7939e0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.921535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7939e0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.921544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7939e0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.921553] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7939e0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.921561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7939e0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.921569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7939e0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.921576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7939e0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.922568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793eb0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.922594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793eb0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.922604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793eb0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.922612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793eb0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.922619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793eb0 is same with the state(6) to be set 00:20:50.172 [2024-12-06 03:29:09.922626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793eb0 is same with the state(6) to be set 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, 
sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 [2024-12-06 03:29:09.927746] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write 
completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 starting I/O failed: -6 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.172 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 [2024-12-06 03:29:09.928699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:50.173 starting I/O failed: -6 00:20:50.173 starting I/O failed: -6 00:20:50.173 starting I/O failed: -6 00:20:50.173 starting I/O failed: -6 00:20:50.173 starting I/O failed: -6 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 
00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with 
error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 [2024-12-06 03:29:09.929875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799b80 is same with the state(6) to be set 00:20:50.173 starting I/O failed: -6 00:20:50.173 [2024-12-06 03:29:09.929901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799b80 is same with the state(6) to be set 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 [2024-12-06 03:29:09.929909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799b80 is same with the state(6) to be set 00:20:50.173 [2024-12-06 03:29:09.929916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799b80 is same with the state(6) to be set 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 [2024-12-06 03:29:09.929923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799b80 is same with the state(6) to be set 00:20:50.173 starting I/O failed: -6 00:20:50.173 [2024-12-06 03:29:09.929932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799b80 is same with the state(6) to be set 00:20:50.173 [2024-12-06 03:29:09.929938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799b80 is same with the state(6) to be set 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 [2024-12-06 03:29:09.929946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0x799b80 is same with the state(6) to be set 00:20:50.173 starting I/O failed: -6 00:20:50.173 [2024-12-06 03:29:09.929959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799b80 is same with the state(6) to be set 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 [2024-12-06 03:29:09.930039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 [2024-12-06 03:29:09.930261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a050 is same with the state(6) to be set 00:20:50.173 starting I/O failed: -6 00:20:50.173 [2024-12-06 03:29:09.930283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a050 is same with the state(6) to be set 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 [2024-12-06 03:29:09.930291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a050 is same with the state(6) to be set 00:20:50.173 starting I/O failed: -6 00:20:50.173 [2024-12-06 03:29:09.930298]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a050 is same with the state(6) to be set 00:20:50.173 [2024-12-06 03:29:09.930305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a050 is same with the state(6) to be set 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 [2024-12-06 03:29:09.930583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a520 is same with the state(6) to be set 00:20:50.173 starting I/O failed: -6 00:20:50.173 [2024-12-06 03:29:09.930607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a520 is same with the state(6) to be set 00:20:50.173 Write 
completed with error (sct=0, sc=8) 00:20:50.173 [2024-12-06 03:29:09.930616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a520 is same with the state(6) to be set 00:20:50.173 starting I/O failed: -6 00:20:50.173 [2024-12-06 03:29:09.930624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a520 is same with the state(6) to be set 00:20:50.173 [2024-12-06 03:29:09.930631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a520 is same with the state(6) to be set 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 [2024-12-06 03:29:09.930639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79a520 is same with the state(6) to be set 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6
00:20:50.173 Write completed with error (sct=0, sc=8) 00:20:50.173 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 [2024-12-06 03:29:09.930965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7996b0 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.930990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7996b0 is same with the state(6) to be set 00:20:50.174 starting I/O failed: -6 00:20:50.174 [2024-12-06 03:29:09.930998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7996b0 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.931006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7996b0 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.931013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7996b0 is same with the state(6) to be set 00:20:50.174 starting I/O failed: -6 00:20:50.174 [2024-12-06 03:29:09.931021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7996b0 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.931028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7996b0 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error 
(sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 [2024-12-06 03:29:09.931474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798820 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.931488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798820 is same with the state(6) to be set 00:20:50.174 starting I/O failed: -6 00:20:50.174 [2024-12-06 03:29:09.931497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798820 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.931504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798820 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.931511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798820 is same with the state(6) to be set 00:20:50.174 starting I/O failed: -6 00:20:50.174 [2024-12-06 03:29:09.931519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798820 is
same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 [2024-12-06 03:29:09.931750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:50.174 NVMe io qpair process completion error 00:20:50.174 [2024-12-06 03:29:09.931928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798d10 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.931952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798d10 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.931960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798d10 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.931966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798d10 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.932084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798d10 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 
00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.932353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7991e0 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.932367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7991e0 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 [2024-12-06 03:29:09.932376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7991e0 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.932386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7991e0 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.932395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7991e0 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.932402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7991e0 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 
00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 [2024-12-06 03:29:09.932686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798350 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.932698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798350 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.932707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798350 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.932714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798350 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.932721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798350 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.932728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798350 is same with the state(6) to be set 00:20:50.174 [2024-12-06 03:29:09.932739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798350 is same with the state(6) to be set 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.932745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798350 is same
with the state(6) to be set 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 [2024-12-06 03:29:09.932792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.174 starting I/O failed: -6 00:20:50.174 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O 
failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 [2024-12-06 03:29:09.933663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 
00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6 00:20:50.175 Write completed with error (sct=0, sc=8) 00:20:50.175 Write completed with 
error (sct=0, sc=8) 00:20:50.175 starting I/O failed: -6
00:20:50.175 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.175 [2024-12-06 03:29:09.934682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:50.176 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.176 [2024-12-06 03:29:09.936407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:50.176 NVMe io qpair process completion error
00:20:50.176 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.176 [2024-12-06 03:29:09.937441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:50.176 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.176 [2024-12-06 03:29:09.938375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:50.177 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.177 [2024-12-06 03:29:09.939389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:50.177 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.177 [2024-12-06 03:29:09.941273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:50.177 NVMe io qpair process completion error
00:20:50.177 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.177 [2024-12-06 03:29:09.942385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:50.178 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.178 [2024-12-06 03:29:09.943298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:50.178 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.178 [2024-12-06 03:29:09.944322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:50.179 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:50.179 [2024-12-06 03:29:09.950226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:50.179 NVMe io qpair process completion error
00:20:50.179 Write completed with error (sct=0, sc=8) [... repeated entries continue ...]
00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed 
with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 [2024-12-06 03:29:09.951338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed 
with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 [2024-12-06 03:29:09.952237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, 
sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.179 starting I/O failed: -6 00:20:50.179 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O 
failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 [2024-12-06 03:29:09.953235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 
00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, 
sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error 
(sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 [2024-12-06 03:29:09.956455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:50.180 NVMe io qpair process completion error 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write 
completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, 
sc=8) 00:20:50.180 [2024-12-06 03:29:09.957494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 Write completed with error (sct=0, sc=8) 00:20:50.180 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, 
sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 [2024-12-06 03:29:09.958421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 
starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 
Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 [2024-12-06 03:29:09.959414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, 
sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.181 starting I/O failed: -6 00:20:50.181 Write completed with error (sct=0, sc=8) 00:20:50.182 starting I/O failed: -6 00:20:50.182 Write completed with error (sct=0, sc=8) 00:20:50.182 starting I/O failed: -6 00:20:50.182 Write completed with error 
00:20:50.182 Write completed with error (sct=0, sc=8)
00:20:50.182 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:20:50.182 [2024-12-06 03:29:09.961515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:50.182 NVMe io qpair process completion error
[... repeated write-completion errors ...]
00:20:50.182 [2024-12-06 03:29:09.962732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion errors ...]
00:20:50.182 [2024-12-06 03:29:09.963686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-completion errors ...]
00:20:50.183 [2024-12-06 03:29:09.964722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion errors ...]
00:20:50.183 [2024-12-06 03:29:09.966735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:50.183 NVMe io qpair process completion error
[... repeated write-completion errors ...]
00:20:50.184 [2024-12-06 03:29:09.967788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion errors ...]
00:20:50.184 [2024-12-06 03:29:09.968609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-completion errors ...]
00:20:50.184 [2024-12-06 03:29:09.969674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion errors ...]
00:20:50.185 [2024-12-06 03:29:09.974808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:50.185 NVMe io qpair process completion error
[... repeated write-completion errors ...]
00:20:50.185 [2024-12-06 03:29:09.975770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion errors ...]
00:20:50.185 [2024-12-06 03:29:09.976699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-completion errors continue ...]
00:20:50.185 starting I/O failed: -6 00:20:50.185 Write completed with error (sct=0, sc=8) 00:20:50.185 Write completed with error (sct=0, sc=8) 00:20:50.185 starting I/O failed: -6 00:20:50.185 Write completed with error (sct=0, sc=8) 00:20:50.185 starting I/O failed: -6 00:20:50.185 Write completed with error (sct=0, sc=8) 00:20:50.185 starting I/O failed: -6 00:20:50.185 Write completed with error (sct=0, sc=8) 00:20:50.185 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 
00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 [2024-12-06 03:29:09.977727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, 
sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error 
(sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with 
error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 [2024-12-06 03:29:09.979374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:50.186 NVMe io qpair process completion error 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 starting I/O failed: -6 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.186 Write completed with error 
(sct=0, sc=8) 00:20:50.186 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 [2024-12-06 03:29:09.980304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 
00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write 
completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 [2024-12-06 03:29:09.981216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with 
error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 
starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 [2024-12-06 03:29:09.982254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write 
completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.187 Write completed with error (sct=0, sc=8) 00:20:50.187 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 
Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 
00:20:50.188 Write completed with error (sct=0, sc=8) 00:20:50.188 starting I/O failed: -6 00:20:50.188 [2024-12-06 03:29:09.984557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:50.188 NVMe io qpair process completion error 00:20:50.188 Initializing NVMe Controllers 00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:20:50.188 Controller IO queue size 128, less than required. 00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:20:50.188 Controller IO queue size 128, less than required. 00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:20:50.188 Controller IO queue size 128, less than required. 00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:20:50.188 Controller IO queue size 128, less than required. 00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:20:50.188 Controller IO queue size 128, less than required. 00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:20:50.188 Controller IO queue size 128, less than required. 
00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:50.188 Controller IO queue size 128, less than required.
00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:20:50.188 Controller IO queue size 128, less than required.
00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:20:50.188 Controller IO queue size 128, less than required.
00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:20:50.188 Controller IO queue size 128, less than required.
00:20:50.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:50.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:50.188 Initialization complete. Launching workers.
00:20:50.188 ========================================================
00:20:50.188 Latency(us)
00:20:50.188 Device Information : IOPS MiB/s Average min max
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2185.33 93.90 58578.41 901.53 107977.69
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2098.63 90.18 61031.83 873.73 112743.50
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2135.17 91.75 60008.43 507.50 111608.41
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2146.63 92.24 59703.75 746.69 117459.08
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2135.60 91.76 60067.11 876.70 123034.30
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2170.63 93.27 59110.83 925.09 107069.98
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2161.55 92.88 58641.90 735.84 104862.72
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2160.25 92.82 58690.00 750.19 104406.50
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2134.74 91.73 59403.86 933.98 103289.36
00:20:50.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2152.25 92.48 58936.26 783.55 102671.97
00:20:50.188 ========================================================
00:20:50.188 Total : 21480.79 923.00 59410.01 507.50 123034.30
00:20:50.188
00:20:50.188 [2024-12-06 03:29:09.987590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e14560 is same with the state(6) to be set
00:20:50.188 [2024-12-06 03:29:09.987637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e14bc0 is same with the state(6) to be set
00:20:50.188 [2024-12-06 03:29:09.987666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1e15740 is same with the state(6) to be set
00:20:50.188 [2024-12-06 03:29:09.987695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15a70 is same with the state(6) to be set
00:20:50.188 [2024-12-06 03:29:09.987724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15410 is same with the state(6) to be set
00:20:50.188 [2024-12-06 03:29:09.987752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e14ef0 is same with the state(6) to be set
00:20:50.188 [2024-12-06 03:29:09.987780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e16720 is same with the state(6) to be set
00:20:50.188 [2024-12-06 03:29:09.987808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e16900 is same with the state(6) to be set
00:20:50.188 [2024-12-06 03:29:09.987835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e16ae0 is same with the state(6) to be set
00:20:50.188 [2024-12-06 03:29:09.987863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e14890 is same with the state(6) to be set
00:20:50.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:20:50.188 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2673770
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2673770
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2673770
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
nvmf/common.sh@516 -- # nvmfcleanup 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:51.567 rmmod nvme_tcp 00:20:51.567 rmmod nvme_fabrics 00:20:51.567 rmmod nvme_keyring 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2673500 ']' 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2673500 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2673500 ']' 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2673500 00:20:51.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2673500) - No such process 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2673500 is not found' 00:20:51.567 Process with pid 2673500 is not found 
00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.567 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:53.472 00:20:53.472 real 0m9.804s 00:20:53.472 user 0m25.143s 00:20:53.472 sys 0m5.076s 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.472 03:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:53.472 ************************************ 00:20:53.472 END TEST nvmf_shutdown_tc4 00:20:53.472 ************************************ 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:20:53.472 00:20:53.472 real 0m38.861s 00:20:53.472 user 1m35.759s 00:20:53.472 sys 0m13.109s 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:53.472 ************************************ 00:20:53.472 END TEST nvmf_shutdown 00:20:53.472 ************************************ 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:53.472 ************************************ 00:20:53.472 START TEST nvmf_nsid 00:20:53.472 ************************************ 00:20:53.472 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:53.730 * Looking for test storage... 
00:20:53.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.730 
03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.730 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.731 --rc genhtml_branch_coverage=1 00:20:53.731 --rc genhtml_function_coverage=1 00:20:53.731 --rc genhtml_legend=1 00:20:53.731 --rc geninfo_all_blocks=1 00:20:53.731 --rc 
geninfo_unexecuted_blocks=1 00:20:53.731 00:20:53.731 ' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.731 --rc genhtml_branch_coverage=1 00:20:53.731 --rc genhtml_function_coverage=1 00:20:53.731 --rc genhtml_legend=1 00:20:53.731 --rc geninfo_all_blocks=1 00:20:53.731 --rc geninfo_unexecuted_blocks=1 00:20:53.731 00:20:53.731 ' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.731 --rc genhtml_branch_coverage=1 00:20:53.731 --rc genhtml_function_coverage=1 00:20:53.731 --rc genhtml_legend=1 00:20:53.731 --rc geninfo_all_blocks=1 00:20:53.731 --rc geninfo_unexecuted_blocks=1 00:20:53.731 00:20:53.731 ' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.731 --rc genhtml_branch_coverage=1 00:20:53.731 --rc genhtml_function_coverage=1 00:20:53.731 --rc genhtml_legend=1 00:20:53.731 --rc geninfo_all_blocks=1 00:20:53.731 --rc geninfo_unexecuted_blocks=1 00:20:53.731 00:20:53.731 ' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.731 03:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:53.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:53.731 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.047 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:59.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:59.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:59.048 Found net devices under 0000:86:00.0: cvl_0_0 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:59.048 Found net devices under 0000:86:00.1: cvl_0_1 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:59.048 03:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.048 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.308 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.308 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.308 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:59.308 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.308 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:59.309 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:20:59.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:20:59.309 00:20:59.309 --- 10.0.0.2 ping statistics --- 00:20:59.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.309 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:20:59.309 00:20:59.309 --- 10.0.0.1 ping statistics --- 00:20:59.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.309 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:59.309 03:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2678232 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2678232 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2678232 ']' 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.309 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:59.568 [2024-12-06 03:29:19.471125] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:20:59.568 [2024-12-06 03:29:19.471179] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.568 [2024-12-06 03:29:19.537428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.568 [2024-12-06 03:29:19.578913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.568 [2024-12-06 03:29:19.578955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.568 [2024-12-06 03:29:19.578962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.568 [2024-12-06 03:29:19.578969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.568 [2024-12-06 03:29:19.578974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.568 [2024-12-06 03:29:19.579517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.568 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.568 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:59.568 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.568 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.568 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:59.827 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.827 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:59.827 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2678259 00:20:59.827 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:59.827 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:59.827 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:59.827 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.828 
03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3f6c14ee-b1c0-4b85-ace5-2223daf0deaf 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=d0760235-e493-4a72-971b-08f81eb3ae80 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d224be89-acfe-4854-92a0-bb111e5a6771 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:59.828 null0 00:20:59.828 null1 00:20:59.828 null2 00:20:59.828 [2024-12-06 03:29:19.764506] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:20:59.828 [2024-12-06 03:29:19.764548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678259 ] 00:20:59.828 [2024-12-06 03:29:19.768181] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.828 [2024-12-06 03:29:19.792374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.828 [2024-12-06 03:29:19.827898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2678259 /var/tmp/tgt2.sock 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2678259 ']' 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:59.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.828 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:59.828 [2024-12-06 03:29:19.871770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.087 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.087 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:00.087 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:00.346 [2024-12-06 03:29:20.395835] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.346 [2024-12-06 03:29:20.411942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:00.346 nvme0n1 nvme0n2 00:21:00.346 nvme1n1 00:21:00.346 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:00.346 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:00.346 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:01.725 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3f6c14ee-b1c0-4b85-ace5-2223daf0deaf 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:02.667 
03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3f6c14eeb1c04b85ace52223daf0deaf 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3F6C14EEB1C04B85ACE52223DAF0DEAF 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3F6C14EEB1C04B85ACE52223DAF0DEAF == \3\F\6\C\1\4\E\E\B\1\C\0\4\B\8\5\A\C\E\5\2\2\2\3\D\A\F\0\D\E\A\F ]] 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid d0760235-e493-4a72-971b-08f81eb3ae80 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 
00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d0760235e4934a72971b08f81eb3ae80 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D0760235E4934A72971B08F81EB3AE80 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ D0760235E4934A72971B08F81EB3AE80 == \D\0\7\6\0\2\3\5\E\4\9\3\4\A\7\2\9\7\1\B\0\8\F\8\1\E\B\3\A\E\8\0 ]] 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d224be89-acfe-4854-92a0-bb111e5a6771 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:02.667 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:02.668 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:02.668 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 
00:21:02.668 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d224be89acfe485492a0bb111e5a6771 00:21:02.668 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D224BE89ACFE485492A0BB111E5A6771 00:21:02.668 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D224BE89ACFE485492A0BB111E5A6771 == \D\2\2\4\B\E\8\9\A\C\F\E\4\8\5\4\9\2\A\0\B\B\1\1\1\E\5\A\6\7\7\1 ]] 00:21:02.668 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2678259 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2678259 ']' 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2678259 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678259 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678259' 00:21:03.034 killing process with pid 2678259 00:21:03.034 03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2678259 00:21:03.034 
03:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2678259 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.406 rmmod nvme_tcp 00:21:03.406 rmmod nvme_fabrics 00:21:03.406 rmmod nvme_keyring 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2678232 ']' 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2678232 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2678232 ']' 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2678232 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678232 00:21:03.406 
03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678232' 00:21:03.406 killing process with pid 2678232 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2678232 00:21:03.406 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2678232 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.759 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.765 03:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.765 00:21:05.766 real 0m12.050s 00:21:05.766 user 0m9.549s 00:21:05.766 sys 0m5.261s 00:21:05.766 03:29:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.766 03:29:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:05.766 ************************************ 00:21:05.766 END TEST nvmf_nsid 00:21:05.766 ************************************ 00:21:05.766 03:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:05.766 00:21:05.766 real 11m45.420s 00:21:05.766 user 25m30.583s 00:21:05.766 sys 3m34.566s 00:21:05.766 03:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.766 03:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.766 ************************************ 00:21:05.766 END TEST nvmf_target_extra 00:21:05.766 ************************************ 00:21:05.766 03:29:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:05.766 03:29:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.766 03:29:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.766 03:29:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:05.766 ************************************ 00:21:05.766 START TEST nvmf_host 00:21:05.766 ************************************ 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:05.766 * Looking for test storage... 
00:21:05.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.766 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:06.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.025 --rc genhtml_branch_coverage=1 00:21:06.025 --rc genhtml_function_coverage=1 00:21:06.025 --rc genhtml_legend=1 00:21:06.025 --rc geninfo_all_blocks=1 00:21:06.025 --rc geninfo_unexecuted_blocks=1 00:21:06.025 00:21:06.025 ' 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:06.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.025 --rc genhtml_branch_coverage=1 00:21:06.025 --rc genhtml_function_coverage=1 00:21:06.025 --rc genhtml_legend=1 00:21:06.025 --rc 
geninfo_all_blocks=1 00:21:06.025 --rc geninfo_unexecuted_blocks=1 00:21:06.025 00:21:06.025 ' 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:06.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.025 --rc genhtml_branch_coverage=1 00:21:06.025 --rc genhtml_function_coverage=1 00:21:06.025 --rc genhtml_legend=1 00:21:06.025 --rc geninfo_all_blocks=1 00:21:06.025 --rc geninfo_unexecuted_blocks=1 00:21:06.025 00:21:06.025 ' 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:06.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.025 --rc genhtml_branch_coverage=1 00:21:06.025 --rc genhtml_function_coverage=1 00:21:06.025 --rc genhtml_legend=1 00:21:06.025 --rc geninfo_all_blocks=1 00:21:06.025 --rc geninfo_unexecuted_blocks=1 00:21:06.025 00:21:06.025 ' 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.025 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.026 ************************************ 00:21:06.026 START TEST nvmf_multicontroller 00:21:06.026 ************************************ 00:21:06.026 03:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:06.026 * Looking for test storage... 
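Editor's note: the trace above records a real (non-fatal) bug — `'[' '' -eq 1 ']'` at `common.sh` line 33 fails with `[: : integer expression expected` because an unset variable expands to an empty string and is then compared with the integer operator `-eq`. A minimal sketch of the failure and the usual fix (the variable name `flag` is hypothetical, standing in for whatever flag `common.sh` tests at line 33):

```shell
#!/bin/sh
flag=""                      # unset/empty, as in the failing run

# This reproduces the error: [ "" -eq 1 ] is not a valid integer test.
# [ "$flag" -eq 1 ] && echo "enabled"   # -> "[: : integer expression expected"

# Defaulting the expansion to 0 makes the test well-formed:
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

With the `:-0` default the branch is taken cleanly instead of printing the error and relying on the non-zero exit status falling through.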
00:21:06.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.026 --rc genhtml_branch_coverage=1 00:21:06.026 --rc genhtml_function_coverage=1 
00:21:06.026 --rc genhtml_legend=1 00:21:06.026 --rc geninfo_all_blocks=1 00:21:06.026 --rc geninfo_unexecuted_blocks=1 00:21:06.026 00:21:06.026 ' 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.026 --rc genhtml_branch_coverage=1 00:21:06.026 --rc genhtml_function_coverage=1 00:21:06.026 --rc genhtml_legend=1 00:21:06.026 --rc geninfo_all_blocks=1 00:21:06.026 --rc geninfo_unexecuted_blocks=1 00:21:06.026 00:21:06.026 ' 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.026 --rc genhtml_branch_coverage=1 00:21:06.026 --rc genhtml_function_coverage=1 00:21:06.026 --rc genhtml_legend=1 00:21:06.026 --rc geninfo_all_blocks=1 00:21:06.026 --rc geninfo_unexecuted_blocks=1 00:21:06.026 00:21:06.026 ' 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.026 --rc genhtml_branch_coverage=1 00:21:06.026 --rc genhtml_function_coverage=1 00:21:06.026 --rc genhtml_legend=1 00:21:06.026 --rc geninfo_all_blocks=1 00:21:06.026 --rc geninfo_unexecuted_blocks=1 00:21:06.026 00:21:06.026 ' 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.026 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.027 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.027 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.027 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.285 03:29:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.285 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:06.286 03:29:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:11.558 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.558 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.558 03:29:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:11.558 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.558 Found net devices under 0000:86:00.1: cvl_0_1 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.558 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:11.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:21:11.559 00:21:11.559 --- 10.0.0.2 ping statistics --- 00:21:11.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.559 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:11.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:21:11.559 00:21:11.559 --- 10.0.0.1 ping statistics --- 00:21:11.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.559 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2682572 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2682572 00:21:11.559 03:29:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2682572 ']' 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.559 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.818 [2024-12-06 03:29:31.710331] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:21:11.818 [2024-12-06 03:29:31.710376] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.818 [2024-12-06 03:29:31.775549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:11.818 [2024-12-06 03:29:31.818796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.818 [2024-12-06 03:29:31.818832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:11.818 [2024-12-06 03:29:31.818840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.818 [2024-12-06 03:29:31.818847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.818 [2024-12-06 03:29:31.818855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.818 [2024-12-06 03:29:31.820303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.818 [2024-12-06 03:29:31.820392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.818 [2024-12-06 03:29:31.820393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.818 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.818 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:11.818 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.818 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.818 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.080 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.080 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 [2024-12-06 03:29:31.966604] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.080 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:12.080 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 Malloc0 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 [2024-12-06 
03:29:32.020770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 [2024-12-06 03:29:32.028687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 Malloc1 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2682600 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2682600 /var/tmp/bdevperf.sock 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # '[' -z 2682600 ']' 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.080 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.339 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.339 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:12.339 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:12.339 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.339 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.599 NVMe0n1 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.599 1 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:12.599 03:29:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.599 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.599 request: 00:21:12.599 { 00:21:12.599 "name": "NVMe0", 00:21:12.599 "trtype": "tcp", 00:21:12.599 "traddr": "10.0.0.2", 00:21:12.599 "adrfam": "ipv4", 00:21:12.599 "trsvcid": "4420", 00:21:12.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.599 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:12.599 "hostaddr": "10.0.0.1", 00:21:12.599 "prchk_reftag": false, 00:21:12.599 "prchk_guard": false, 00:21:12.599 "hdgst": false, 00:21:12.599 "ddgst": false, 00:21:12.599 "allow_unrecognized_csi": false, 00:21:12.599 "method": "bdev_nvme_attach_controller", 00:21:12.599 "req_id": 1 00:21:12.599 } 00:21:12.599 Got JSON-RPC error response 00:21:12.599 response: 00:21:12.599 { 00:21:12.599 "code": -114, 00:21:12.599 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:12.600 } 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:12.600 03:29:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.600 request: 00:21:12.600 { 00:21:12.600 "name": "NVMe0", 00:21:12.600 "trtype": "tcp", 00:21:12.600 "traddr": "10.0.0.2", 00:21:12.600 "adrfam": "ipv4", 00:21:12.600 "trsvcid": "4420", 00:21:12.600 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:12.600 "hostaddr": "10.0.0.1", 00:21:12.600 "prchk_reftag": false, 00:21:12.600 "prchk_guard": false, 00:21:12.600 "hdgst": false, 00:21:12.600 "ddgst": false, 00:21:12.600 "allow_unrecognized_csi": false, 00:21:12.600 "method": "bdev_nvme_attach_controller", 00:21:12.600 "req_id": 1 00:21:12.600 } 00:21:12.600 Got JSON-RPC error response 00:21:12.600 response: 00:21:12.600 { 00:21:12.600 "code": -114, 00:21:12.600 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:12.600 } 00:21:12.600 03:29:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.600 request: 00:21:12.600 { 00:21:12.600 "name": "NVMe0", 00:21:12.600 "trtype": "tcp", 00:21:12.600 "traddr": "10.0.0.2", 00:21:12.600 "adrfam": "ipv4", 00:21:12.600 "trsvcid": "4420", 00:21:12.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.600 "hostaddr": "10.0.0.1", 00:21:12.600 "prchk_reftag": false, 00:21:12.600 "prchk_guard": false, 00:21:12.600 "hdgst": false, 00:21:12.600 "ddgst": false, 00:21:12.600 "multipath": "disable", 00:21:12.600 "allow_unrecognized_csi": false, 00:21:12.600 "method": "bdev_nvme_attach_controller", 00:21:12.600 "req_id": 1 00:21:12.600 } 00:21:12.600 Got JSON-RPC error response 00:21:12.600 response: 00:21:12.600 { 00:21:12.600 "code": -114, 00:21:12.600 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:12.600 } 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.600 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.600 request: 00:21:12.600 { 00:21:12.600 "name": "NVMe0", 00:21:12.600 "trtype": "tcp", 00:21:12.600 "traddr": "10.0.0.2", 00:21:12.600 "adrfam": "ipv4", 00:21:12.600 "trsvcid": "4420", 00:21:12.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.600 "hostaddr": "10.0.0.1", 00:21:12.600 "prchk_reftag": false, 00:21:12.600 "prchk_guard": false, 00:21:12.600 "hdgst": false, 00:21:12.600 "ddgst": false, 00:21:12.600 "multipath": "failover", 00:21:12.600 "allow_unrecognized_csi": false, 00:21:12.600 "method": "bdev_nvme_attach_controller", 00:21:12.600 "req_id": 1 00:21:12.600 } 00:21:12.600 Got JSON-RPC error response 00:21:12.600 response: 00:21:12.600 { 00:21:12.600 "code": -114, 00:21:12.600 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:12.600 } 00:21:12.600 03:29:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.601 NVMe0n1 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.601 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.860 00:21:12.860 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.860 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:12.860 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:12.860 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.860 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.860 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.860 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:12.860 03:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.798 { 00:21:13.798 "results": [ 00:21:13.798 { 00:21:13.798 "job": "NVMe0n1", 00:21:13.798 "core_mask": "0x1", 00:21:13.798 "workload": "write", 00:21:13.798 "status": "finished", 00:21:13.798 "queue_depth": 128, 00:21:13.798 "io_size": 4096, 00:21:13.798 "runtime": 1.00516, 00:21:13.798 "iops": 24334.434318914402, 00:21:13.798 "mibps": 95.05638405825938, 00:21:13.798 "io_failed": 0, 00:21:13.798 "io_timeout": 0, 00:21:13.798 "avg_latency_us": 5248.890689608589, 00:21:13.798 "min_latency_us": 1531.5478260869565, 00:21:13.798 "max_latency_us": 9118.052173913044 00:21:13.798 } 00:21:13.798 ], 00:21:13.798 "core_count": 1 00:21:13.798 } 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2682600 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2682600 ']' 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2682600 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.798 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2682600 00:21:14.057 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.057 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.057 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2682600' 00:21:14.057 killing process with pid 2682600 00:21:14.057 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2682600 00:21:14.057 03:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2682600 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:14.057 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:14.057 [2024-12-06 03:29:32.128261] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:21:14.057 [2024-12-06 03:29:32.128309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682600 ] 00:21:14.057 [2024-12-06 03:29:32.190943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.057 [2024-12-06 03:29:32.235292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.057 [2024-12-06 03:29:32.734385] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name d4c98eef-956e-4e66-8d56-5ee02153d4df already exists 00:21:14.057 [2024-12-06 03:29:32.734414] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:d4c98eef-956e-4e66-8d56-5ee02153d4df alias for bdev NVMe1n1 00:21:14.057 [2024-12-06 03:29:32.734422] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:14.057 Running I/O for 1 seconds... 00:21:14.057 24268.00 IOPS, 94.80 MiB/s 00:21:14.057 Latency(us) 00:21:14.057 [2024-12-06T02:29:34.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.057 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:14.057 NVMe0n1 : 1.01 24334.43 95.06 0.00 0.00 5248.89 1531.55 9118.05 00:21:14.057 [2024-12-06T02:29:34.198Z] =================================================================================================================== 00:21:14.057 [2024-12-06T02:29:34.198Z] Total : 24334.43 95.06 0.00 0.00 5248.89 1531.55 9118.05 00:21:14.057 Received shutdown signal, test time was about 1.000000 seconds 00:21:14.057 00:21:14.057 Latency(us) 00:21:14.057 [2024-12-06T02:29:34.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.057 [2024-12-06T02:29:34.198Z] =================================================================================================================== 00:21:14.057 [2024-12-06T02:29:34.198Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:14.057 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.057 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.057 rmmod nvme_tcp 00:21:14.057 rmmod nvme_fabrics 00:21:14.316 rmmod nvme_keyring 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2682572 ']' 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2682572 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2682572 ']' 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2682572 
00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2682572 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2682572' 00:21:14.316 killing process with pid 2682572 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2682572 00:21:14.316 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2682572 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.575 03:29:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.489 03:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.489 00:21:16.489 real 0m10.579s 00:21:16.489 user 0m11.638s 00:21:16.489 sys 0m4.883s 00:21:16.489 03:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.489 03:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.489 ************************************ 00:21:16.489 END TEST nvmf_multicontroller 00:21:16.489 ************************************ 00:21:16.489 03:29:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:16.489 03:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.489 03:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.489 03:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.750 ************************************ 00:21:16.750 START TEST nvmf_aer 00:21:16.750 ************************************ 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:16.750 * Looking for test storage... 
00:21:16.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:16.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.750 --rc genhtml_branch_coverage=1 00:21:16.750 --rc genhtml_function_coverage=1 00:21:16.750 --rc genhtml_legend=1 00:21:16.750 --rc geninfo_all_blocks=1 00:21:16.750 --rc geninfo_unexecuted_blocks=1 00:21:16.750 00:21:16.750 ' 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:16.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.750 --rc 
genhtml_branch_coverage=1 00:21:16.750 --rc genhtml_function_coverage=1 00:21:16.750 --rc genhtml_legend=1 00:21:16.750 --rc geninfo_all_blocks=1 00:21:16.750 --rc geninfo_unexecuted_blocks=1 00:21:16.750 00:21:16.750 ' 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:16.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.750 --rc genhtml_branch_coverage=1 00:21:16.750 --rc genhtml_function_coverage=1 00:21:16.750 --rc genhtml_legend=1 00:21:16.750 --rc geninfo_all_blocks=1 00:21:16.750 --rc geninfo_unexecuted_blocks=1 00:21:16.750 00:21:16.750 ' 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:16.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.750 --rc genhtml_branch_coverage=1 00:21:16.750 --rc genhtml_function_coverage=1 00:21:16.750 --rc genhtml_legend=1 00:21:16.750 --rc geninfo_all_blocks=1 00:21:16.750 --rc geninfo_unexecuted_blocks=1 00:21:16.750 00:21:16.750 ' 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.750 03:29:36 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.750 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.751 03:29:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:22.024 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:22.024 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.024 03:29:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:22.024 Found net devices under 0000:86:00.0: cvl_0_0 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:22.024 Found net devices under 0000:86:00.1: cvl_0_1 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:22.024 03:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.024 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.024 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.024 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:22.024 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:22.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:22.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:21:22.024 00:21:22.024 --- 10.0.0.2 ping statistics --- 00:21:22.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.025 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:21:22.025 00:21:22.025 --- 10.0.0.1 ping statistics --- 00:21:22.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.025 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2686364 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2686364 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2686364 ']' 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.025 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.025 [2024-12-06 03:29:42.157227] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:21:22.025 [2024-12-06 03:29:42.157273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.284 [2024-12-06 03:29:42.223450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.284 [2024-12-06 03:29:42.267689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:22.284 [2024-12-06 03:29:42.267725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.284 [2024-12-06 03:29:42.267732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.284 [2024-12-06 03:29:42.267739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.284 [2024-12-06 03:29:42.267744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.284 [2024-12-06 03:29:42.269186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.284 [2024-12-06 03:29:42.269282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.284 [2024-12-06 03:29:42.269366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.284 [2024-12-06 03:29:42.269368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.284 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.284 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:22.284 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.284 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:22.284 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.284 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.284 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.284 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.284 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.284 [2024-12-06 03:29:42.416518] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.543 Malloc0 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.543 [2024-12-06 03:29:42.476054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
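The host/aer.sh setup traced above reduces to a short sequence of rpc.py calls: create the TCP transport, create a 64 MiB malloc bdev, create subsystem cnode1 capped at 2 namespaces, attach Malloc0 as namespace 1, and add a TCP listener on 10.0.0.2:4420. A minimal sketch of that sequence, with method names, NQN, and addresses taken from the log; shown in dry-run form (the commands are collected and printed, not executed) since running them for real needs a live nvmf_tgt and its `/var/tmp/spdk.sock` socket:

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence from host/aer.sh as traced in the log.
# Nothing is executed; each rpc.py invocation is appended to $cmds instead.
cmds=""
rpc() { cmds="${cmds}rpc.py $*
"; }

rpc nvmf_create_transport -t tcp -o -u 8192                    # host/aer.sh@14
rpc bdev_malloc_create 64 512 --name Malloc0                   # host/aer.sh@16
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 2                              # host/aer.sh@17
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # host/aer.sh@18
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                                 # host/aer.sh@19

printf '%s' "$cmds"
```

Against a running target the same five calls produce exactly the subsystem state dumped by `nvmf_get_subsystems` below (cnode1, max_namespaces 2, Malloc0 as nsid 1, one TCP listener on 10.0.0.2:4420).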
00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.543 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.543 [ 00:21:22.543 { 00:21:22.543 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:22.543 "subtype": "Discovery", 00:21:22.543 "listen_addresses": [], 00:21:22.543 "allow_any_host": true, 00:21:22.543 "hosts": [] 00:21:22.543 }, 00:21:22.543 { 00:21:22.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.543 "subtype": "NVMe", 00:21:22.543 "listen_addresses": [ 00:21:22.543 { 00:21:22.543 "trtype": "TCP", 00:21:22.543 "adrfam": "IPv4", 00:21:22.543 "traddr": "10.0.0.2", 00:21:22.543 "trsvcid": "4420" 00:21:22.543 } 00:21:22.543 ], 00:21:22.543 "allow_any_host": true, 00:21:22.543 "hosts": [], 00:21:22.543 "serial_number": "SPDK00000000000001", 00:21:22.543 "model_number": "SPDK bdev Controller", 00:21:22.544 "max_namespaces": 2, 00:21:22.544 "min_cntlid": 1, 00:21:22.544 "max_cntlid": 65519, 00:21:22.544 "namespaces": [ 00:21:22.544 { 00:21:22.544 "nsid": 1, 00:21:22.544 "bdev_name": "Malloc0", 00:21:22.544 "name": "Malloc0", 00:21:22.544 "nguid": "2BFFFDDA72404FA3BA5768DA47E7FAB0", 00:21:22.544 "uuid": "2bfffdda-7240-4fa3-ba57-68da47e7fab0" 00:21:22.544 } 00:21:22.544 ] 00:21:22.544 } 00:21:22.544 ] 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2686431 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:22.544 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.803 Malloc1 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.803 [ 00:21:22.803 { 00:21:22.803 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:22.803 "subtype": "Discovery", 00:21:22.803 "listen_addresses": [], 00:21:22.803 "allow_any_host": true, 00:21:22.803 "hosts": [] 00:21:22.803 }, 00:21:22.803 { 00:21:22.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.803 "subtype": "NVMe", 00:21:22.803 "listen_addresses": [ 00:21:22.803 { 00:21:22.803 "trtype": "TCP", 00:21:22.803 "adrfam": "IPv4", 00:21:22.803 "traddr": "10.0.0.2", 00:21:22.803 "trsvcid": "4420" 00:21:22.803 } 00:21:22.803 ], 00:21:22.803 "allow_any_host": true, 00:21:22.803 "hosts": [], 00:21:22.803 "serial_number": "SPDK00000000000001", 00:21:22.803 "model_number": 
"SPDK bdev Controller", 00:21:22.803 "max_namespaces": 2, 00:21:22.803 "min_cntlid": 1, 00:21:22.803 "max_cntlid": 65519, 00:21:22.803 "namespaces": [ 00:21:22.803 { 00:21:22.803 "nsid": 1, 00:21:22.803 "bdev_name": "Malloc0", 00:21:22.803 "name": "Malloc0", 00:21:22.803 "nguid": "2BFFFDDA72404FA3BA5768DA47E7FAB0", 00:21:22.803 "uuid": "2bfffdda-7240-4fa3-ba57-68da47e7fab0" 00:21:22.803 Asynchronous Event Request test 00:21:22.803 Attaching to 10.0.0.2 00:21:22.803 Attached to 10.0.0.2 00:21:22.803 Registering asynchronous event callbacks... 00:21:22.803 Starting namespace attribute notice tests for all controllers... 00:21:22.803 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:22.803 aer_cb - Changed Namespace 00:21:22.803 Cleaning up... 00:21:22.803 }, 00:21:22.803 { 00:21:22.803 "nsid": 2, 00:21:22.803 "bdev_name": "Malloc1", 00:21:22.803 "name": "Malloc1", 00:21:22.803 "nguid": "298604C335CA46658CA5FE938A3A0396", 00:21:22.803 "uuid": "298604c3-35ca-4665-8ca5-fe938a3a0396" 00:21:22.803 } 00:21:22.803 ] 00:21:22.803 } 00:21:22.803 ] 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2686431 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.803 
03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:22.803 rmmod nvme_tcp 00:21:22.803 rmmod nvme_fabrics 00:21:22.803 rmmod nvme_keyring 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2686364 ']' 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2686364 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2686364 ']' 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 
-- # kill -0 2686364 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.803 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2686364 00:21:23.062 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:23.062 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:23.062 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2686364' 00:21:23.062 killing process with pid 2686364 00:21:23.062 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2686364 00:21:23.062 03:29:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2686364 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.062 03:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:25.599 00:21:25.599 real 0m8.557s 00:21:25.599 user 0m4.927s 00:21:25.599 sys 0m4.394s 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.599 ************************************ 00:21:25.599 END TEST nvmf_aer 00:21:25.599 ************************************ 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.599 ************************************ 00:21:25.599 START TEST nvmf_async_init 00:21:25.599 ************************************ 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:25.599 * Looking for test storage... 
00:21:25.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.599 03:29:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.599 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:25.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.600 --rc genhtml_branch_coverage=1 00:21:25.600 --rc genhtml_function_coverage=1 00:21:25.600 --rc genhtml_legend=1 00:21:25.600 --rc geninfo_all_blocks=1 00:21:25.600 --rc geninfo_unexecuted_blocks=1 00:21:25.600 
00:21:25.600 ' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:25.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.600 --rc genhtml_branch_coverage=1 00:21:25.600 --rc genhtml_function_coverage=1 00:21:25.600 --rc genhtml_legend=1 00:21:25.600 --rc geninfo_all_blocks=1 00:21:25.600 --rc geninfo_unexecuted_blocks=1 00:21:25.600 00:21:25.600 ' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:25.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.600 --rc genhtml_branch_coverage=1 00:21:25.600 --rc genhtml_function_coverage=1 00:21:25.600 --rc genhtml_legend=1 00:21:25.600 --rc geninfo_all_blocks=1 00:21:25.600 --rc geninfo_unexecuted_blocks=1 00:21:25.600 00:21:25.600 ' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:25.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.600 --rc genhtml_branch_coverage=1 00:21:25.600 --rc genhtml_function_coverage=1 00:21:25.600 --rc genhtml_legend=1 00:21:25.600 --rc geninfo_all_blocks=1 00:21:25.600 --rc geninfo_unexecuted_blocks=1 00:21:25.600 00:21:25.600 ' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6a057ac39936421c86392c8f5f72c4d1 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.600 03:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:32.170 03:29:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:32.170 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:32.170 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.170 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:32.171 Found net devices under 0000:86:00.0: cvl_0_0 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:32.171 Found net devices under 0000:86:00.1: cvl_0_1 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:32.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:32.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:21:32.171 00:21:32.171 --- 10.0.0.2 ping statistics --- 00:21:32.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.171 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:32.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:21:32.171 00:21:32.171 --- 10.0.0.1 ping statistics --- 00:21:32.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.171 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2690129 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2690129 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2690129 ']' 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.171 [2024-12-06 03:29:51.420128] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:21:32.171 [2024-12-06 03:29:51.420175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.171 [2024-12-06 03:29:51.487008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.171 [2024-12-06 03:29:51.530219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.171 [2024-12-06 03:29:51.530255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.171 [2024-12-06 03:29:51.530262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.171 [2024-12-06 03:29:51.530268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.171 [2024-12-06 03:29:51.530273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:32.171 [2024-12-06 03:29:51.530780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.171 [2024-12-06 03:29:51.667836] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.171 null0 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:32.171 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6a057ac39936421c86392c8f5f72c4d1 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.172 [2024-12-06 03:29:51.708089] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.172 nvme0n1 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.172 [ 00:21:32.172 { 00:21:32.172 "name": "nvme0n1", 00:21:32.172 "aliases": [ 00:21:32.172 "6a057ac3-9936-421c-8639-2c8f5f72c4d1" 00:21:32.172 ], 00:21:32.172 "product_name": "NVMe disk", 00:21:32.172 "block_size": 512, 00:21:32.172 "num_blocks": 2097152, 00:21:32.172 "uuid": "6a057ac3-9936-421c-8639-2c8f5f72c4d1", 00:21:32.172 "numa_id": 1, 00:21:32.172 "assigned_rate_limits": { 00:21:32.172 "rw_ios_per_sec": 0, 00:21:32.172 "rw_mbytes_per_sec": 0, 00:21:32.172 "r_mbytes_per_sec": 0, 00:21:32.172 "w_mbytes_per_sec": 0 00:21:32.172 }, 00:21:32.172 "claimed": false, 00:21:32.172 "zoned": false, 00:21:32.172 "supported_io_types": { 00:21:32.172 "read": true, 00:21:32.172 "write": true, 00:21:32.172 "unmap": false, 00:21:32.172 "flush": true, 00:21:32.172 "reset": true, 00:21:32.172 "nvme_admin": true, 00:21:32.172 "nvme_io": true, 00:21:32.172 "nvme_io_md": false, 00:21:32.172 "write_zeroes": true, 00:21:32.172 "zcopy": false, 00:21:32.172 "get_zone_info": false, 00:21:32.172 "zone_management": false, 00:21:32.172 "zone_append": false, 00:21:32.172 "compare": true, 00:21:32.172 "compare_and_write": true, 00:21:32.172 "abort": true, 00:21:32.172 "seek_hole": false, 00:21:32.172 "seek_data": false, 00:21:32.172 "copy": true, 00:21:32.172 
"nvme_iov_md": false 00:21:32.172 }, 00:21:32.172 "memory_domains": [ 00:21:32.172 { 00:21:32.172 "dma_device_id": "system", 00:21:32.172 "dma_device_type": 1 00:21:32.172 } 00:21:32.172 ], 00:21:32.172 "driver_specific": { 00:21:32.172 "nvme": [ 00:21:32.172 { 00:21:32.172 "trid": { 00:21:32.172 "trtype": "TCP", 00:21:32.172 "adrfam": "IPv4", 00:21:32.172 "traddr": "10.0.0.2", 00:21:32.172 "trsvcid": "4420", 00:21:32.172 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:32.172 }, 00:21:32.172 "ctrlr_data": { 00:21:32.172 "cntlid": 1, 00:21:32.172 "vendor_id": "0x8086", 00:21:32.172 "model_number": "SPDK bdev Controller", 00:21:32.172 "serial_number": "00000000000000000000", 00:21:32.172 "firmware_revision": "25.01", 00:21:32.172 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:32.172 "oacs": { 00:21:32.172 "security": 0, 00:21:32.172 "format": 0, 00:21:32.172 "firmware": 0, 00:21:32.172 "ns_manage": 0 00:21:32.172 }, 00:21:32.172 "multi_ctrlr": true, 00:21:32.172 "ana_reporting": false 00:21:32.172 }, 00:21:32.172 "vs": { 00:21:32.172 "nvme_version": "1.3" 00:21:32.172 }, 00:21:32.172 "ns_data": { 00:21:32.172 "id": 1, 00:21:32.172 "can_share": true 00:21:32.172 } 00:21:32.172 } 00:21:32.172 ], 00:21:32.172 "mp_policy": "active_passive" 00:21:32.172 } 00:21:32.172 } 00:21:32.172 ] 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.172 03:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.172 [2024-12-06 03:29:51.956753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:32.172 [2024-12-06 03:29:51.956808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x908c00 (9): Bad file descriptor 00:21:32.172 [2024-12-06 03:29:52.089039] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.172 [ 00:21:32.172 { 00:21:32.172 "name": "nvme0n1", 00:21:32.172 "aliases": [ 00:21:32.172 "6a057ac3-9936-421c-8639-2c8f5f72c4d1" 00:21:32.172 ], 00:21:32.172 "product_name": "NVMe disk", 00:21:32.172 "block_size": 512, 00:21:32.172 "num_blocks": 2097152, 00:21:32.172 "uuid": "6a057ac3-9936-421c-8639-2c8f5f72c4d1", 00:21:32.172 "numa_id": 1, 00:21:32.172 "assigned_rate_limits": { 00:21:32.172 "rw_ios_per_sec": 0, 00:21:32.172 "rw_mbytes_per_sec": 0, 00:21:32.172 "r_mbytes_per_sec": 0, 00:21:32.172 "w_mbytes_per_sec": 0 00:21:32.172 }, 00:21:32.172 "claimed": false, 00:21:32.172 "zoned": false, 00:21:32.172 "supported_io_types": { 00:21:32.172 "read": true, 00:21:32.172 "write": true, 00:21:32.172 "unmap": false, 00:21:32.172 "flush": true, 00:21:32.172 "reset": true, 00:21:32.172 "nvme_admin": true, 00:21:32.172 "nvme_io": true, 00:21:32.172 "nvme_io_md": false, 00:21:32.172 "write_zeroes": true, 00:21:32.172 "zcopy": false, 00:21:32.172 "get_zone_info": false, 00:21:32.172 "zone_management": false, 00:21:32.172 "zone_append": false, 00:21:32.172 "compare": true, 00:21:32.172 "compare_and_write": true, 00:21:32.172 "abort": true, 00:21:32.172 "seek_hole": false, 00:21:32.172 "seek_data": false, 00:21:32.172 "copy": true, 00:21:32.172 "nvme_iov_md": false 00:21:32.172 }, 00:21:32.172 "memory_domains": [ 
00:21:32.172 { 00:21:32.172 "dma_device_id": "system", 00:21:32.172 "dma_device_type": 1 00:21:32.172 } 00:21:32.172 ], 00:21:32.172 "driver_specific": { 00:21:32.172 "nvme": [ 00:21:32.172 { 00:21:32.172 "trid": { 00:21:32.172 "trtype": "TCP", 00:21:32.172 "adrfam": "IPv4", 00:21:32.172 "traddr": "10.0.0.2", 00:21:32.172 "trsvcid": "4420", 00:21:32.172 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:32.172 }, 00:21:32.172 "ctrlr_data": { 00:21:32.172 "cntlid": 2, 00:21:32.172 "vendor_id": "0x8086", 00:21:32.172 "model_number": "SPDK bdev Controller", 00:21:32.172 "serial_number": "00000000000000000000", 00:21:32.172 "firmware_revision": "25.01", 00:21:32.172 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:32.172 "oacs": { 00:21:32.172 "security": 0, 00:21:32.172 "format": 0, 00:21:32.172 "firmware": 0, 00:21:32.172 "ns_manage": 0 00:21:32.172 }, 00:21:32.172 "multi_ctrlr": true, 00:21:32.172 "ana_reporting": false 00:21:32.172 }, 00:21:32.172 "vs": { 00:21:32.172 "nvme_version": "1.3" 00:21:32.172 }, 00:21:32.172 "ns_data": { 00:21:32.172 "id": 1, 00:21:32.172 "can_share": true 00:21:32.172 } 00:21:32.172 } 00:21:32.172 ], 00:21:32.172 "mp_policy": "active_passive" 00:21:32.172 } 00:21:32.172 } 00:21:32.172 ] 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.fY9rEvQVMU 
00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.fY9rEvQVMU 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.fY9rEvQVMU 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:32.172 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.173 [2024-12-06 03:29:52.141313] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:32.173 [2024-12-06 03:29:52.141411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.173 [2024-12-06 03:29:52.157367] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.173 nvme0n1 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.173 [ 00:21:32.173 { 00:21:32.173 "name": "nvme0n1", 00:21:32.173 "aliases": [ 00:21:32.173 "6a057ac3-9936-421c-8639-2c8f5f72c4d1" 00:21:32.173 ], 00:21:32.173 "product_name": "NVMe disk", 00:21:32.173 "block_size": 512, 00:21:32.173 "num_blocks": 2097152, 00:21:32.173 "uuid": "6a057ac3-9936-421c-8639-2c8f5f72c4d1", 00:21:32.173 "numa_id": 1, 00:21:32.173 "assigned_rate_limits": { 00:21:32.173 "rw_ios_per_sec": 0, 00:21:32.173 
"rw_mbytes_per_sec": 0, 00:21:32.173 "r_mbytes_per_sec": 0, 00:21:32.173 "w_mbytes_per_sec": 0 00:21:32.173 }, 00:21:32.173 "claimed": false, 00:21:32.173 "zoned": false, 00:21:32.173 "supported_io_types": { 00:21:32.173 "read": true, 00:21:32.173 "write": true, 00:21:32.173 "unmap": false, 00:21:32.173 "flush": true, 00:21:32.173 "reset": true, 00:21:32.173 "nvme_admin": true, 00:21:32.173 "nvme_io": true, 00:21:32.173 "nvme_io_md": false, 00:21:32.173 "write_zeroes": true, 00:21:32.173 "zcopy": false, 00:21:32.173 "get_zone_info": false, 00:21:32.173 "zone_management": false, 00:21:32.173 "zone_append": false, 00:21:32.173 "compare": true, 00:21:32.173 "compare_and_write": true, 00:21:32.173 "abort": true, 00:21:32.173 "seek_hole": false, 00:21:32.173 "seek_data": false, 00:21:32.173 "copy": true, 00:21:32.173 "nvme_iov_md": false 00:21:32.173 }, 00:21:32.173 "memory_domains": [ 00:21:32.173 { 00:21:32.173 "dma_device_id": "system", 00:21:32.173 "dma_device_type": 1 00:21:32.173 } 00:21:32.173 ], 00:21:32.173 "driver_specific": { 00:21:32.173 "nvme": [ 00:21:32.173 { 00:21:32.173 "trid": { 00:21:32.173 "trtype": "TCP", 00:21:32.173 "adrfam": "IPv4", 00:21:32.173 "traddr": "10.0.0.2", 00:21:32.173 "trsvcid": "4421", 00:21:32.173 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:32.173 }, 00:21:32.173 "ctrlr_data": { 00:21:32.173 "cntlid": 3, 00:21:32.173 "vendor_id": "0x8086", 00:21:32.173 "model_number": "SPDK bdev Controller", 00:21:32.173 "serial_number": "00000000000000000000", 00:21:32.173 "firmware_revision": "25.01", 00:21:32.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:32.173 "oacs": { 00:21:32.173 "security": 0, 00:21:32.173 "format": 0, 00:21:32.173 "firmware": 0, 00:21:32.173 "ns_manage": 0 00:21:32.173 }, 00:21:32.173 "multi_ctrlr": true, 00:21:32.173 "ana_reporting": false 00:21:32.173 }, 00:21:32.173 "vs": { 00:21:32.173 "nvme_version": "1.3" 00:21:32.173 }, 00:21:32.173 "ns_data": { 00:21:32.173 "id": 1, 00:21:32.173 "can_share": true 00:21:32.173 } 
00:21:32.173 } 00:21:32.173 ], 00:21:32.173 "mp_policy": "active_passive" 00:21:32.173 } 00:21:32.173 } 00:21:32.173 ] 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.fY9rEvQVMU 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.173 rmmod nvme_tcp 00:21:32.173 rmmod nvme_fabrics 00:21:32.173 rmmod nvme_keyring 00:21:32.173 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:32.446 03:29:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2690129 ']' 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2690129 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2690129 ']' 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2690129 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2690129 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2690129' 00:21:32.446 killing process with pid 2690129 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2690129 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2690129 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.446 
03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.446 03:29:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.989 00:21:34.989 real 0m9.319s 00:21:34.989 user 0m2.961s 00:21:34.989 sys 0m4.725s 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.989 ************************************ 00:21:34.989 END TEST nvmf_async_init 00:21:34.989 ************************************ 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.989 ************************************ 00:21:34.989 START TEST dma 00:21:34.989 ************************************ 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
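The nvmf_async_init run above exercises TLS by writing a PSK interchange string (`NVMeTLSkey-1:01:…:`) to a temp file, registering it with `keyring_file_add_key`, and passing `--psk key0` to both `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller`. A minimal shape check for that key string can be sketched as below; the meaning of the second field (01 = SHA-256, 02 = SHA-384, 00 = none) is an assumption from the PSK interchange format, not something the log itself states:

```shell
check_psk_format() {
    # Shape check for a PSK interchange string like the one echoed above:
    #   "NVMeTLSkey-1:HH:<base64 blob>:"
    # HH is assumed to select the PSK hash (00/01/02); the base64 blob is
    # only checked for decodability here, not for its CRC trailer.
    local key=$1 prefix hash b64
    IFS=: read -r prefix hash b64 _ <<< "$key"
    [ "$prefix" = "NVMeTLSkey-1" ] || return 1
    case "$hash" in 00|01|02) ;; *) return 1 ;; esac
    printf '%s' "$b64" | base64 -d > /dev/null 2>&1
}

# The key from the log passes the shape check:
check_psk_format "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" \
    && echo "key format OK"
```

This only validates structure; whether the target accepts the key still depends on the keyring and `--psk` plumbing shown in the RPC calls above.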
00:21:34.989 * Looking for test storage... 00:21:34.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:34.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.989 --rc genhtml_branch_coverage=1 00:21:34.989 --rc genhtml_function_coverage=1 00:21:34.989 --rc genhtml_legend=1 00:21:34.989 --rc geninfo_all_blocks=1 00:21:34.989 --rc geninfo_unexecuted_blocks=1 00:21:34.989 00:21:34.989 ' 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:34.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.989 --rc genhtml_branch_coverage=1 00:21:34.989 --rc genhtml_function_coverage=1 
00:21:34.989 --rc genhtml_legend=1 00:21:34.989 --rc geninfo_all_blocks=1 00:21:34.989 --rc geninfo_unexecuted_blocks=1 00:21:34.989 00:21:34.989 ' 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:34.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.989 --rc genhtml_branch_coverage=1 00:21:34.989 --rc genhtml_function_coverage=1 00:21:34.989 --rc genhtml_legend=1 00:21:34.989 --rc geninfo_all_blocks=1 00:21:34.989 --rc geninfo_unexecuted_blocks=1 00:21:34.989 00:21:34.989 ' 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:34.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.989 --rc genhtml_branch_coverage=1 00:21:34.989 --rc genhtml_function_coverage=1 00:21:34.989 --rc genhtml_legend=1 00:21:34.989 --rc geninfo_all_blocks=1 00:21:34.989 --rc geninfo_unexecuted_blocks=1 00:21:34.989 00:21:34.989 ' 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.989 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:34.990 
03:29:54 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:34.990 00:21:34.990 real 0m0.189s 00:21:34.990 user 0m0.105s 00:21:34.990 sys 0m0.096s 00:21:34.990 03:29:54 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:34.990 ************************************ 00:21:34.990 END TEST dma 00:21:34.990 ************************************ 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.990 ************************************ 00:21:34.990 START TEST nvmf_identify 00:21:34.990 ************************************ 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:34.990 * Looking for test storage... 
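The `lt 1.15 2` / `cmp_versions` trace that runs at the top of each test above splits both version strings into fields and compares them numerically, field by field, with missing fields treated as 0. A standalone sketch of that comparison (simplified to "less than" and splitting on `.` only, whereas the traced `cmp_versions` also splits on `-` and `:`) looks like this:

```shell
ver_lt() {
    # Returns success if $1 is a strictly lower version than $2.
    # Mirrors the traced cmp_versions loop: compare numeric fields
    # left to right; absent fields default to 0; equal means "not less".
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1
}

ver_lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2", as in the trace
```

Note the comparison is numeric, not lexical: `1.15 < 2` holds even though the string `"1.15"` sorts after `"2"` would suggest otherwise character-wise.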
00:21:34.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:34.990 03:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:34.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.990 --rc genhtml_branch_coverage=1 00:21:34.990 --rc genhtml_function_coverage=1 00:21:34.990 --rc genhtml_legend=1 00:21:34.990 --rc geninfo_all_blocks=1 00:21:34.990 --rc geninfo_unexecuted_blocks=1 00:21:34.990 00:21:34.990 ' 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:21:34.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.990 --rc genhtml_branch_coverage=1 00:21:34.990 --rc genhtml_function_coverage=1 00:21:34.990 --rc genhtml_legend=1 00:21:34.990 --rc geninfo_all_blocks=1 00:21:34.990 --rc geninfo_unexecuted_blocks=1 00:21:34.990 00:21:34.990 ' 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:34.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.990 --rc genhtml_branch_coverage=1 00:21:34.990 --rc genhtml_function_coverage=1 00:21:34.990 --rc genhtml_legend=1 00:21:34.990 --rc geninfo_all_blocks=1 00:21:34.990 --rc geninfo_unexecuted_blocks=1 00:21:34.990 00:21:34.990 ' 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:34.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.990 --rc genhtml_branch_coverage=1 00:21:34.990 --rc genhtml_function_coverage=1 00:21:34.990 --rc genhtml_legend=1 00:21:34.990 --rc geninfo_all_blocks=1 00:21:34.990 --rc geninfo_unexecuted_blocks=1 00:21:34.990 00:21:34.990 ' 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.990 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.991 03:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.261 03:30:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:40.261 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.261 
03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:40.261 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:40.261 Found net devices under 0000:86:00.0: cvl_0_0 00:21:40.261 03:30:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:40.261 Found net devices under 0000:86:00.1: cvl_0_1 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
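The device-discovery loop traced above (`gather_supported_nvmf_pci_devs` in nvmf/common.sh) resolves each supported NIC's PCI address to its kernel net device through sysfs, which is where the "Found net devices under 0000:86:00.x: cvl_0_x" lines come from. A minimal standalone sketch of that sysfs lookup, assuming only a Linux host (the `cvl_0_*` names and `0000:86:00.*` addresses seen in this run are specific to the WFP8 test rig and will differ elsewhere):

```shell
# Sketch of the pci -> net-device lookup used by gather_supported_nvmf_pci_devs:
# each PCI network function exposes its interfaces under
# /sys/bus/pci/devices/<bdf>/net/. Read-only; safe to run on any Linux host.
list_pci_net_devs() {
    total=0
    for pci in /sys/bus/pci/devices/*; do
        [ -d "$pci/net" ] || continue          # skip non-network PCI functions
        for dev in "$pci/net"/*; do
            echo "Found net devices under ${pci##*/}: ${dev##*/}"
            total=$((total + 1))
        done
    done
    echo "total: $total"
}
list_pci_net_devs
```

The real script additionally filters by vendor/device ID (the `e810`, `x722`, and `mlx` arrays built above) before doing this lookup, so only NICs the test suite supports end up in `net_devs`.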
00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.261 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:21:40.522 00:21:40.522 --- 10.0.0.2 ping statistics --- 00:21:40.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.522 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:21:40.522 00:21:40.522 --- 10.0.0.1 ping statistics --- 00:21:40.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.522 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2693763 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2693763 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2693763 ']' 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.522 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.522 [2024-12-06 03:30:00.552720] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:21:40.522 [2024-12-06 03:30:00.552767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.522 [2024-12-06 03:30:00.624197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.783 [2024-12-06 03:30:00.669208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.783 [2024-12-06 03:30:00.669246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.783 [2024-12-06 03:30:00.669253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.783 [2024-12-06 03:30:00.669260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.783 [2024-12-06 03:30:00.669266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
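The interface plumbing performed earlier by `nvmf_tcp_init` (before `nvmf_tgt` was launched inside the namespace) amounts to moving the target-side port into a network namespace, addressing both ends, and opening the NVMe/TCP port. A dry-run sketch of that sequence using the values from this run (`cvl_0_0`/`cvl_0_1`, 10.0.0.1/10.0.0.2, port 4420); the real commands need root, so this version only prints them:

```shell
# Dry-run sketch of the nvmf_tcp_init plumbing seen above: the target side
# (cvl_0_0, 10.0.0.2) is isolated in netns cvl_0_0_ns_spdk while the
# initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # swap the echo for real execution when running as root
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

This is why the target below is started as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...` and listens on 10.0.0.2:4420, while the initiator-side tools reach it from the root namespace over cvl_0_1.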
00:21:40.783 [2024-12-06 03:30:00.670885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.783 [2024-12-06 03:30:00.670904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.783 [2024-12-06 03:30:00.670975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.783 [2024-12-06 03:30:00.670977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.783 [2024-12-06 03:30:00.781972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.783 Malloc0 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.783 03:30:00 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.783 [2024-12-06 03:30:00.878653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.783 03:30:00 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:40.783 [ 00:21:40.783 { 00:21:40.783 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:40.783 "subtype": "Discovery", 00:21:40.783 "listen_addresses": [ 00:21:40.783 { 00:21:40.783 "trtype": "TCP", 00:21:40.783 "adrfam": "IPv4", 00:21:40.783 "traddr": "10.0.0.2", 00:21:40.783 "trsvcid": "4420" 00:21:40.783 } 00:21:40.783 ], 00:21:40.783 "allow_any_host": true, 00:21:40.783 "hosts": [] 00:21:40.783 }, 00:21:40.783 { 00:21:40.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.783 "subtype": "NVMe", 00:21:40.783 "listen_addresses": [ 00:21:40.783 { 00:21:40.783 "trtype": "TCP", 00:21:40.783 "adrfam": "IPv4", 00:21:40.783 "traddr": "10.0.0.2", 00:21:40.783 "trsvcid": "4420" 00:21:40.783 } 00:21:40.783 ], 00:21:40.783 "allow_any_host": true, 00:21:40.783 "hosts": [], 00:21:40.783 "serial_number": "SPDK00000000000001", 00:21:40.783 "model_number": "SPDK bdev Controller", 00:21:40.783 "max_namespaces": 32, 00:21:40.783 "min_cntlid": 1, 00:21:40.783 "max_cntlid": 65519, 00:21:40.783 "namespaces": [ 00:21:40.783 { 00:21:40.783 "nsid": 1, 00:21:40.783 "bdev_name": "Malloc0", 00:21:40.783 "name": "Malloc0", 00:21:40.783 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:40.783 "eui64": "ABCDEF0123456789", 00:21:40.783 "uuid": "22219eeb-c214-4360-aa69-515c1b88de03" 00:21:40.783 } 00:21:40.783 ] 00:21:40.783 } 00:21:40.783 ] 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.783 03:30:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:41.046 [2024-12-06 03:30:00.931551] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:21:41.046 [2024-12-06 03:30:00.931586] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693861 ] 00:21:41.046 [2024-12-06 03:30:00.973507] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:41.046 [2024-12-06 03:30:00.973557] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:41.046 [2024-12-06 03:30:00.973563] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:41.046 [2024-12-06 03:30:00.973577] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:41.046 [2024-12-06 03:30:00.973585] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:41.046 [2024-12-06 03:30:00.977233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:41.046 [2024-12-06 03:30:00.977273] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1452690 0 00:21:41.046 [2024-12-06 03:30:00.984965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:41.046 [2024-12-06 03:30:00.984981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:41.046 [2024-12-06 03:30:00.984988] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:41.046 [2024-12-06 03:30:00.984991] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:41.046 [2024-12-06 03:30:00.985026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.046 [2024-12-06 03:30:00.985031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.046 [2024-12-06 03:30:00.985035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 00:21:41.046 [2024-12-06 03:30:00.985046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:41.046 [2024-12-06 03:30:00.985064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.046 [2024-12-06 03:30:00.991960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.046 [2024-12-06 03:30:00.991969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.046 [2024-12-06 03:30:00.991972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.046 [2024-12-06 03:30:00.991977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.046 [2024-12-06 03:30:00.991988] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:41.046 [2024-12-06 03:30:00.991994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:41.046 [2024-12-06 03:30:00.991999] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:41.046 [2024-12-06 03:30:00.992013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.046 [2024-12-06 03:30:00.992017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.046 [2024-12-06 03:30:00.992020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 
00:21:41.046 [2024-12-06 03:30:00.992027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.046 [2024-12-06 03:30:00.992041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.046 [2024-12-06 03:30:00.992134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.046 [2024-12-06 03:30:00.992140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.046 [2024-12-06 03:30:00.992143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.046 [2024-12-06 03:30:00.992147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.046 [2024-12-06 03:30:00.992151] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:41.046 [2024-12-06 03:30:00.992158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:41.047 [2024-12-06 03:30:00.992164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 00:21:41.047 [2024-12-06 03:30:00.992177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-12-06 03:30:00.992187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.047 [2024-12-06 03:30:00.992253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.047 [2024-12-06 03:30:00.992259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:41.047 [2024-12-06 03:30:00.992263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.047 [2024-12-06 03:30:00.992270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:41.047 [2024-12-06 03:30:00.992280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:41.047 [2024-12-06 03:30:00.992286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 00:21:41.047 [2024-12-06 03:30:00.992298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-12-06 03:30:00.992308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.047 [2024-12-06 03:30:00.992375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.047 [2024-12-06 03:30:00.992381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.047 [2024-12-06 03:30:00.992384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.047 [2024-12-06 03:30:00.992391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:41.047 [2024-12-06 03:30:00.992400] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 00:21:41.047 [2024-12-06 03:30:00.992412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-12-06 03:30:00.992422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.047 [2024-12-06 03:30:00.992493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.047 [2024-12-06 03:30:00.992499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.047 [2024-12-06 03:30:00.992502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.047 [2024-12-06 03:30:00.992509] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:41.047 [2024-12-06 03:30:00.992514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:41.047 [2024-12-06 03:30:00.992521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:41.047 [2024-12-06 03:30:00.992631] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:41.047 [2024-12-06 03:30:00.992636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:41.047 [2024-12-06 03:30:00.992643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 00:21:41.047 [2024-12-06 03:30:00.992655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-12-06 03:30:00.992665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.047 [2024-12-06 03:30:00.992748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.047 [2024-12-06 03:30:00.992754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.047 [2024-12-06 03:30:00.992759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.047 [2024-12-06 03:30:00.992766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:41.047 [2024-12-06 03:30:00.992775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 00:21:41.047 [2024-12-06 03:30:00.992787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-12-06 03:30:00.992797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.047 [2024-12-06 
03:30:00.992858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.047 [2024-12-06 03:30:00.992864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.047 [2024-12-06 03:30:00.992867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.047 [2024-12-06 03:30:00.992874] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:41.047 [2024-12-06 03:30:00.992878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:41.047 [2024-12-06 03:30:00.992885] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:41.047 [2024-12-06 03:30:00.992898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:41.047 [2024-12-06 03:30:00.992906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.992909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 00:21:41.047 [2024-12-06 03:30:00.992915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.047 [2024-12-06 03:30:00.992925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.047 [2024-12-06 03:30:00.993039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.047 [2024-12-06 03:30:00.993046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:21:41.047 [2024-12-06 03:30:00.993049] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993052] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1452690): datao=0, datal=4096, cccid=0 00:21:41.047 [2024-12-06 03:30:00.993057] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14b4100) on tqpair(0x1452690): expected_datao=0, payload_size=4096 00:21:41.047 [2024-12-06 03:30:00.993061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993068] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993072] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.047 [2024-12-06 03:30:00.993095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.047 [2024-12-06 03:30:00.993098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.047 [2024-12-06 03:30:00.993109] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:41.047 [2024-12-06 03:30:00.993118] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:41.047 [2024-12-06 03:30:00.993122] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:41.047 [2024-12-06 03:30:00.993127] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:41.047 [2024-12-06 03:30:00.993131] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:41.047 [2024-12-06 03:30:00.993135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:41.047 [2024-12-06 03:30:00.993144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:41.047 [2024-12-06 03:30:00.993150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 00:21:41.047 [2024-12-06 03:30:00.993163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:41.047 [2024-12-06 03:30:00.993176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.047 [2024-12-06 03:30:00.993239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.047 [2024-12-06 03:30:00.993245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.047 [2024-12-06 03:30:00.993248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.047 [2024-12-06 03:30:00.993259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1452690) 00:21:41.047 [2024-12-06 03:30:00.993271] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.047 [2024-12-06 03:30:00.993276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.047 [2024-12-06 03:30:00.993282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1452690) 00:21:41.047 [2024-12-06 03:30:00.993287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.048 [2024-12-06 03:30:00.993293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1452690) 00:21:41.048 [2024-12-06 03:30:00.993304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.048 [2024-12-06 03:30:00.993309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.048 [2024-12-06 03:30:00.993320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.048 [2024-12-06 03:30:00.993324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:41.048 [2024-12-06 03:30:00.993335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:41.048 [2024-12-06 03:30:00.993342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1452690) 00:21:41.048 [2024-12-06 03:30:00.993352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-12-06 03:30:00.993363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4100, cid 0, qid 0 00:21:41.048 [2024-12-06 03:30:00.993368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4280, cid 1, qid 0 00:21:41.048 [2024-12-06 03:30:00.993372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4400, cid 2, qid 0 00:21:41.048 [2024-12-06 03:30:00.993376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.048 [2024-12-06 03:30:00.993380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4700, cid 4, qid 0 00:21:41.048 [2024-12-06 03:30:00.993479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.048 [2024-12-06 03:30:00.993485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.048 [2024-12-06 03:30:00.993488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4700) on tqpair=0x1452690 00:21:41.048 [2024-12-06 03:30:00.993496] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:41.048 [2024-12-06 03:30:00.993501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:21:41.048 [2024-12-06 03:30:00.993510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1452690) 00:21:41.048 [2024-12-06 03:30:00.993519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-12-06 03:30:00.993528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4700, cid 4, qid 0 00:21:41.048 [2024-12-06 03:30:00.993604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.048 [2024-12-06 03:30:00.993611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.048 [2024-12-06 03:30:00.993614] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993617] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1452690): datao=0, datal=4096, cccid=4 00:21:41.048 [2024-12-06 03:30:00.993621] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14b4700) on tqpair(0x1452690): expected_datao=0, payload_size=4096 00:21:41.048 [2024-12-06 03:30:00.993625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993635] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:00.993638] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.048 [2024-12-06 03:30:01.034035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.048 [2024-12-06 03:30:01.034039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x14b4700) on tqpair=0x1452690 00:21:41.048 [2024-12-06 03:30:01.034056] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:41.048 [2024-12-06 03:30:01.034079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1452690) 00:21:41.048 [2024-12-06 03:30:01.034091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-12-06 03:30:01.034100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1452690) 00:21:41.048 [2024-12-06 03:30:01.034113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.048 [2024-12-06 03:30:01.034129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4700, cid 4, qid 0 00:21:41.048 [2024-12-06 03:30:01.034134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4880, cid 5, qid 0 00:21:41.048 [2024-12-06 03:30:01.034238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.048 [2024-12-06 03:30:01.034244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.048 [2024-12-06 03:30:01.034247] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034251] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1452690): datao=0, datal=1024, cccid=4 00:21:41.048 [2024-12-06 03:30:01.034255] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14b4700) on tqpair(0x1452690): expected_datao=0, payload_size=1024 00:21:41.048 [2024-12-06 03:30:01.034259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034265] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034268] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.048 [2024-12-06 03:30:01.034278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.048 [2024-12-06 03:30:01.034282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.034285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4880) on tqpair=0x1452690 00:21:41.048 [2024-12-06 03:30:01.078958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.048 [2024-12-06 03:30:01.078968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.048 [2024-12-06 03:30:01.078971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.078975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4700) on tqpair=0x1452690 00:21:41.048 [2024-12-06 03:30:01.078985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.078989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1452690) 00:21:41.048 [2024-12-06 03:30:01.078996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-12-06 03:30:01.079012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4700, cid 4, qid 0 00:21:41.048 [2024-12-06 03:30:01.079088] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.048 [2024-12-06 03:30:01.079095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.048 [2024-12-06 03:30:01.079098] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.079101] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1452690): datao=0, datal=3072, cccid=4 00:21:41.048 [2024-12-06 03:30:01.079105] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14b4700) on tqpair(0x1452690): expected_datao=0, payload_size=3072 00:21:41.048 [2024-12-06 03:30:01.079109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.079120] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.079124] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.079165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.048 [2024-12-06 03:30:01.079171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.048 [2024-12-06 03:30:01.079178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.079182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4700) on tqpair=0x1452690 00:21:41.048 [2024-12-06 03:30:01.079189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.079193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1452690) 00:21:41.048 [2024-12-06 03:30:01.079199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.048 [2024-12-06 03:30:01.079212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4700, cid 4, qid 0 00:21:41.048 [2024-12-06 
03:30:01.079286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.048 [2024-12-06 03:30:01.079292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.048 [2024-12-06 03:30:01.079295] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.079298] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1452690): datao=0, datal=8, cccid=4 00:21:41.048 [2024-12-06 03:30:01.079302] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14b4700) on tqpair(0x1452690): expected_datao=0, payload_size=8 00:21:41.048 [2024-12-06 03:30:01.079306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.079312] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.079315] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.122958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.048 [2024-12-06 03:30:01.122968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.048 [2024-12-06 03:30:01.122971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.048 [2024-12-06 03:30:01.122974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4700) on tqpair=0x1452690 00:21:41.048 ===================================================== 00:21:41.048 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:41.048 ===================================================== 00:21:41.048 Controller Capabilities/Features 00:21:41.048 ================================ 00:21:41.049 Vendor ID: 0000 00:21:41.049 Subsystem Vendor ID: 0000 00:21:41.049 Serial Number: .................... 00:21:41.049 Model Number: ........................................ 
00:21:41.049 Firmware Version: 25.01 00:21:41.049 Recommended Arb Burst: 0 00:21:41.049 IEEE OUI Identifier: 00 00 00 00:21:41.049 Multi-path I/O 00:21:41.049 May have multiple subsystem ports: No 00:21:41.049 May have multiple controllers: No 00:21:41.049 Associated with SR-IOV VF: No 00:21:41.049 Max Data Transfer Size: 131072 00:21:41.049 Max Number of Namespaces: 0 00:21:41.049 Max Number of I/O Queues: 1024 00:21:41.049 NVMe Specification Version (VS): 1.3 00:21:41.049 NVMe Specification Version (Identify): 1.3 00:21:41.049 Maximum Queue Entries: 128 00:21:41.049 Contiguous Queues Required: Yes 00:21:41.049 Arbitration Mechanisms Supported 00:21:41.049 Weighted Round Robin: Not Supported 00:21:41.049 Vendor Specific: Not Supported 00:21:41.049 Reset Timeout: 15000 ms 00:21:41.049 Doorbell Stride: 4 bytes 00:21:41.049 NVM Subsystem Reset: Not Supported 00:21:41.049 Command Sets Supported 00:21:41.049 NVM Command Set: Supported 00:21:41.049 Boot Partition: Not Supported 00:21:41.049 Memory Page Size Minimum: 4096 bytes 00:21:41.049 Memory Page Size Maximum: 4096 bytes 00:21:41.049 Persistent Memory Region: Not Supported 00:21:41.049 Optional Asynchronous Events Supported 00:21:41.049 Namespace Attribute Notices: Not Supported 00:21:41.049 Firmware Activation Notices: Not Supported 00:21:41.049 ANA Change Notices: Not Supported 00:21:41.049 PLE Aggregate Log Change Notices: Not Supported 00:21:41.049 LBA Status Info Alert Notices: Not Supported 00:21:41.049 EGE Aggregate Log Change Notices: Not Supported 00:21:41.049 Normal NVM Subsystem Shutdown event: Not Supported 00:21:41.049 Zone Descriptor Change Notices: Not Supported 00:21:41.049 Discovery Log Change Notices: Supported 00:21:41.049 Controller Attributes 00:21:41.049 128-bit Host Identifier: Not Supported 00:21:41.049 Non-Operational Permissive Mode: Not Supported 00:21:41.049 NVM Sets: Not Supported 00:21:41.049 Read Recovery Levels: Not Supported 00:21:41.049 Endurance Groups: Not Supported 00:21:41.049 
Predictable Latency Mode: Not Supported 00:21:41.049 Traffic Based Keep ALive: Not Supported 00:21:41.049 Namespace Granularity: Not Supported 00:21:41.049 SQ Associations: Not Supported 00:21:41.049 UUID List: Not Supported 00:21:41.049 Multi-Domain Subsystem: Not Supported 00:21:41.049 Fixed Capacity Management: Not Supported 00:21:41.049 Variable Capacity Management: Not Supported 00:21:41.049 Delete Endurance Group: Not Supported 00:21:41.049 Delete NVM Set: Not Supported 00:21:41.049 Extended LBA Formats Supported: Not Supported 00:21:41.049 Flexible Data Placement Supported: Not Supported 00:21:41.049 00:21:41.049 Controller Memory Buffer Support 00:21:41.049 ================================ 00:21:41.049 Supported: No 00:21:41.049 00:21:41.049 Persistent Memory Region Support 00:21:41.049 ================================ 00:21:41.049 Supported: No 00:21:41.049 00:21:41.049 Admin Command Set Attributes 00:21:41.049 ============================ 00:21:41.049 Security Send/Receive: Not Supported 00:21:41.049 Format NVM: Not Supported 00:21:41.049 Firmware Activate/Download: Not Supported 00:21:41.049 Namespace Management: Not Supported 00:21:41.049 Device Self-Test: Not Supported 00:21:41.049 Directives: Not Supported 00:21:41.049 NVMe-MI: Not Supported 00:21:41.049 Virtualization Management: Not Supported 00:21:41.049 Doorbell Buffer Config: Not Supported 00:21:41.049 Get LBA Status Capability: Not Supported 00:21:41.049 Command & Feature Lockdown Capability: Not Supported 00:21:41.049 Abort Command Limit: 1 00:21:41.049 Async Event Request Limit: 4 00:21:41.049 Number of Firmware Slots: N/A 00:21:41.049 Firmware Slot 1 Read-Only: N/A 00:21:41.049 Firmware Activation Without Reset: N/A 00:21:41.049 Multiple Update Detection Support: N/A 00:21:41.049 Firmware Update Granularity: No Information Provided 00:21:41.049 Per-Namespace SMART Log: No 00:21:41.049 Asymmetric Namespace Access Log Page: Not Supported 00:21:41.049 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:41.049 Command Effects Log Page: Not Supported 00:21:41.049 Get Log Page Extended Data: Supported 00:21:41.049 Telemetry Log Pages: Not Supported 00:21:41.049 Persistent Event Log Pages: Not Supported 00:21:41.049 Supported Log Pages Log Page: May Support 00:21:41.049 Commands Supported & Effects Log Page: Not Supported 00:21:41.049 Feature Identifiers & Effects Log Page:May Support 00:21:41.049 NVMe-MI Commands & Effects Log Page: May Support 00:21:41.049 Data Area 4 for Telemetry Log: Not Supported 00:21:41.049 Error Log Page Entries Supported: 128 00:21:41.049 Keep Alive: Not Supported 00:21:41.049 00:21:41.049 NVM Command Set Attributes 00:21:41.049 ========================== 00:21:41.049 Submission Queue Entry Size 00:21:41.049 Max: 1 00:21:41.049 Min: 1 00:21:41.049 Completion Queue Entry Size 00:21:41.049 Max: 1 00:21:41.049 Min: 1 00:21:41.049 Number of Namespaces: 0 00:21:41.049 Compare Command: Not Supported 00:21:41.049 Write Uncorrectable Command: Not Supported 00:21:41.049 Dataset Management Command: Not Supported 00:21:41.049 Write Zeroes Command: Not Supported 00:21:41.049 Set Features Save Field: Not Supported 00:21:41.049 Reservations: Not Supported 00:21:41.049 Timestamp: Not Supported 00:21:41.049 Copy: Not Supported 00:21:41.049 Volatile Write Cache: Not Present 00:21:41.049 Atomic Write Unit (Normal): 1 00:21:41.049 Atomic Write Unit (PFail): 1 00:21:41.049 Atomic Compare & Write Unit: 1 00:21:41.049 Fused Compare & Write: Supported 00:21:41.049 Scatter-Gather List 00:21:41.049 SGL Command Set: Supported 00:21:41.049 SGL Keyed: Supported 00:21:41.049 SGL Bit Bucket Descriptor: Not Supported 00:21:41.049 SGL Metadata Pointer: Not Supported 00:21:41.049 Oversized SGL: Not Supported 00:21:41.049 SGL Metadata Address: Not Supported 00:21:41.049 SGL Offset: Supported 00:21:41.049 Transport SGL Data Block: Not Supported 00:21:41.049 Replay Protected Memory Block: Not Supported 00:21:41.049 00:21:41.049 
Firmware Slot Information 00:21:41.049 ========================= 00:21:41.049 Active slot: 0 00:21:41.049 00:21:41.049 00:21:41.049 Error Log 00:21:41.049 ========= 00:21:41.049 00:21:41.049 Active Namespaces 00:21:41.049 ================= 00:21:41.049 Discovery Log Page 00:21:41.049 ================== 00:21:41.049 Generation Counter: 2 00:21:41.049 Number of Records: 2 00:21:41.049 Record Format: 0 00:21:41.049 00:21:41.049 Discovery Log Entry 0 00:21:41.049 ---------------------- 00:21:41.049 Transport Type: 3 (TCP) 00:21:41.049 Address Family: 1 (IPv4) 00:21:41.049 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:41.049 Entry Flags: 00:21:41.049 Duplicate Returned Information: 1 00:21:41.049 Explicit Persistent Connection Support for Discovery: 1 00:21:41.049 Transport Requirements: 00:21:41.049 Secure Channel: Not Required 00:21:41.049 Port ID: 0 (0x0000) 00:21:41.049 Controller ID: 65535 (0xffff) 00:21:41.049 Admin Max SQ Size: 128 00:21:41.049 Transport Service Identifier: 4420 00:21:41.049 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:41.049 Transport Address: 10.0.0.2 00:21:41.049 Discovery Log Entry 1 00:21:41.049 ---------------------- 00:21:41.049 Transport Type: 3 (TCP) 00:21:41.049 Address Family: 1 (IPv4) 00:21:41.049 Subsystem Type: 2 (NVM Subsystem) 00:21:41.049 Entry Flags: 00:21:41.049 Duplicate Returned Information: 0 00:21:41.049 Explicit Persistent Connection Support for Discovery: 0 00:21:41.049 Transport Requirements: 00:21:41.049 Secure Channel: Not Required 00:21:41.049 Port ID: 0 (0x0000) 00:21:41.049 Controller ID: 65535 (0xffff) 00:21:41.049 Admin Max SQ Size: 128 00:21:41.049 Transport Service Identifier: 4420 00:21:41.049 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:41.049 Transport Address: 10.0.0.2 [2024-12-06 03:30:01.123060] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:41.049 [2024-12-06 
03:30:01.123071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4100) on tqpair=0x1452690 00:21:41.049 [2024-12-06 03:30:01.123077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-12-06 03:30:01.123082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4280) on tqpair=0x1452690 00:21:41.049 [2024-12-06 03:30:01.123086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.049 [2024-12-06 03:30:01.123090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4400) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.123094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.050 [2024-12-06 03:30:01.123099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.123103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.050 [2024-12-06 03:30:01.123113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.123126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.123141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.123202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 
03:30:01.123208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.123213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.123222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.123234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.123247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.123327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.123333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.123336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.123344] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:41.050 [2024-12-06 03:30:01.123348] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:41.050 [2024-12-06 03:30:01.123356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 
[2024-12-06 03:30:01.123363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.123368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.123378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.123449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.123454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.123457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.123470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.123482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.123491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.123559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.123564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.123568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on 
tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.123579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.123591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.123600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.123667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.123673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.123676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.123687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.123700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.123709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.123774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.123780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:21:41.050 [2024-12-06 03:30:01.123782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.123794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.123806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.123815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.123887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.123892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.123896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.123906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.123913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.123919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.123928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.123997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.124003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.124006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.124017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.124030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.124040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.124105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.124112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.124115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.124126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.124139] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.124148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.124216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.124222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.124225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.124236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.050 [2024-12-06 03:30:01.124249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.050 [2024-12-06 03:30:01.124258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.050 [2024-12-06 03:30:01.124326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.050 [2024-12-06 03:30:01.124331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.050 [2024-12-06 03:30:01.124334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.050 [2024-12-06 03:30:01.124345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124349] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.050 [2024-12-06 03:30:01.124352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.124358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 03:30:01.124367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.124433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 03:30:01.124439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.124442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.124453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.124465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 03:30:01.124475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.124542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 03:30:01.124548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.124553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124557] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.124565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.124577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 03:30:01.124586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.124671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 03:30:01.124677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.124680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.124691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.124704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 03:30:01.124713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.124774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 
03:30:01.124780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.124783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.124794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.124807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 03:30:01.124816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.124883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 03:30:01.124888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.124891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.124902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.124909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.124915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 
03:30:01.124924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.124992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 03:30:01.124998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.125001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.125015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.125027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 03:30:01.125037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.125100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 03:30:01.125106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.125109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.125120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.125133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 03:30:01.125143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.125213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 03:30:01.125219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.125223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.125234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.125246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 03:30:01.125255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.125320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 03:30:01.125326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.125329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.125341] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.051 [2024-12-06 03:30:01.125353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.051 [2024-12-06 03:30:01.125362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.051 [2024-12-06 03:30:01.125430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.051 [2024-12-06 03:30:01.125436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.051 [2024-12-06 03:30:01.125439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.051 [2024-12-06 03:30:01.125452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.051 [2024-12-06 03:30:01.125459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.052 [2024-12-06 03:30:01.125465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.052 [2024-12-06 03:30:01.125474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.052 [2024-12-06 03:30:01.128956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.052 [2024-12-06 03:30:01.128965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.052 [2024-12-06 03:30:01.128968] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.052 [2024-12-06 03:30:01.128971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.052 [2024-12-06 03:30:01.128981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.052 [2024-12-06 03:30:01.128985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.052 [2024-12-06 03:30:01.128988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1452690) 00:21:41.052 [2024-12-06 03:30:01.128994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.052 [2024-12-06 03:30:01.129005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b4580, cid 3, qid 0 00:21:41.052 [2024-12-06 03:30:01.129135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.052 [2024-12-06 03:30:01.129141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.052 [2024-12-06 03:30:01.129144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.052 [2024-12-06 03:30:01.129147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b4580) on tqpair=0x1452690 00:21:41.052 [2024-12-06 03:30:01.129154] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:21:41.052 00:21:41.052 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:41.052 [2024-12-06 03:30:01.165961] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:21:41.052 [2024-12-06 03:30:01.165999] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694029 ] 00:21:41.316 [2024-12-06 03:30:01.207443] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:41.316 [2024-12-06 03:30:01.207489] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:41.316 [2024-12-06 03:30:01.207494] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:41.316 [2024-12-06 03:30:01.207507] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:41.316 [2024-12-06 03:30:01.207516] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:41.316 [2024-12-06 03:30:01.207918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:41.316 [2024-12-06 03:30:01.207946] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x111b690 0 00:21:41.316 [2024-12-06 03:30:01.217987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:41.316 [2024-12-06 03:30:01.218001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:41.316 [2024-12-06 03:30:01.218008] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:41.316 [2024-12-06 03:30:01.218011] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:41.316 [2024-12-06 03:30:01.218039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.218044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.218048] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.316 [2024-12-06 03:30:01.218058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:41.316 [2024-12-06 03:30:01.218074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d100, cid 0, qid 0 00:21:41.316 [2024-12-06 03:30:01.228958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.316 [2024-12-06 03:30:01.228966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.316 [2024-12-06 03:30:01.228969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.228973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.316 [2024-12-06 03:30:01.228982] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:41.316 [2024-12-06 03:30:01.228987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:41.316 [2024-12-06 03:30:01.228992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:41.316 [2024-12-06 03:30:01.229002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.229006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.229009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.316 [2024-12-06 03:30:01.229016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.316 [2024-12-06 03:30:01.229029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d100, cid 0, qid 0 00:21:41.316 [2024-12-06 03:30:01.229152] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.316 [2024-12-06 03:30:01.229159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.316 [2024-12-06 03:30:01.229162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.229165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.316 [2024-12-06 03:30:01.229169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:41.316 [2024-12-06 03:30:01.229176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:41.316 [2024-12-06 03:30:01.229182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.229186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.229189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.316 [2024-12-06 03:30:01.229195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.316 [2024-12-06 03:30:01.229205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d100, cid 0, qid 0 00:21:41.316 [2024-12-06 03:30:01.229299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.316 [2024-12-06 03:30:01.229305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.316 [2024-12-06 03:30:01.229308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.229311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.316 [2024-12-06 03:30:01.229316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:21:41.316 [2024-12-06 03:30:01.229325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:41.316 [2024-12-06 03:30:01.229331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.229335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.316 [2024-12-06 03:30:01.229338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.229343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.317 [2024-12-06 03:30:01.229353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d100, cid 0, qid 0 00:21:41.317 [2024-12-06 03:30:01.229417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.317 [2024-12-06 03:30:01.229422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.317 [2024-12-06 03:30:01.229425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.317 [2024-12-06 03:30:01.229433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:41.317 [2024-12-06 03:30:01.229441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.229453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.317 [2024-12-06 03:30:01.229463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d100, cid 0, qid 0 00:21:41.317 [2024-12-06 03:30:01.229551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.317 [2024-12-06 03:30:01.229556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.317 [2024-12-06 03:30:01.229560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.317 [2024-12-06 03:30:01.229566] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:41.317 [2024-12-06 03:30:01.229570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:41.317 [2024-12-06 03:30:01.229577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:41.317 [2024-12-06 03:30:01.229685] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:41.317 [2024-12-06 03:30:01.229689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:41.317 [2024-12-06 03:30:01.229695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.229708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.317 [2024-12-06 03:30:01.229718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d100, cid 0, qid 0 00:21:41.317 [2024-12-06 03:30:01.229785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.317 [2024-12-06 03:30:01.229791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.317 [2024-12-06 03:30:01.229794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.317 [2024-12-06 03:30:01.229805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:41.317 [2024-12-06 03:30:01.229813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.229826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.317 [2024-12-06 03:30:01.229835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d100, cid 0, qid 0 00:21:41.317 [2024-12-06 03:30:01.229936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.317 [2024-12-06 03:30:01.229942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.317 [2024-12-06 03:30:01.229945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.317 [2024-12-06 03:30:01.229959] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:41.317 [2024-12-06 03:30:01.229963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:41.317 [2024-12-06 03:30:01.229970] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:41.317 [2024-12-06 03:30:01.229976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:41.317 [2024-12-06 03:30:01.229984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.229988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.229993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.317 [2024-12-06 03:30:01.230003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d100, cid 0, qid 0 00:21:41.317 [2024-12-06 03:30:01.230098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.317 [2024-12-06 03:30:01.230104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.317 [2024-12-06 03:30:01.230107] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111b690): datao=0, datal=4096, cccid=0 00:21:41.317 [2024-12-06 03:30:01.230114] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x117d100) on tqpair(0x111b690): expected_datao=0, payload_size=4096 00:21:41.317 [2024-12-06 03:30:01.230118] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230139] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230144] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.317 [2024-12-06 03:30:01.230192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.317 [2024-12-06 03:30:01.230195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.317 [2024-12-06 03:30:01.230205] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:41.317 [2024-12-06 03:30:01.230211] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:41.317 [2024-12-06 03:30:01.230215] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:41.317 [2024-12-06 03:30:01.230220] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:41.317 [2024-12-06 03:30:01.230224] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:41.317 [2024-12-06 03:30:01.230229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:41.317 [2024-12-06 03:30:01.230237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:41.317 [2024-12-06 03:30:01.230243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230246] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.230255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:41.317 [2024-12-06 03:30:01.230265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d100, cid 0, qid 0 00:21:41.317 [2024-12-06 03:30:01.230328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.317 [2024-12-06 03:30:01.230334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.317 [2024-12-06 03:30:01.230337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.317 [2024-12-06 03:30:01.230346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.230357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.317 [2024-12-06 03:30:01.230363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.230374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:41.317 [2024-12-06 03:30:01.230379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.230390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.317 [2024-12-06 03:30:01.230395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.230406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.317 [2024-12-06 03:30:01.230410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:41.317 [2024-12-06 03:30:01.230420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:41.317 [2024-12-06 03:30:01.230426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.317 [2024-12-06 03:30:01.230429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111b690) 00:21:41.317 [2024-12-06 03:30:01.230436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.318 [2024-12-06 03:30:01.230447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x117d100, cid 0, qid 0 00:21:41.318 [2024-12-06 03:30:01.230451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d280, cid 1, qid 0 00:21:41.318 [2024-12-06 03:30:01.230456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d400, cid 2, qid 0 00:21:41.318 [2024-12-06 03:30:01.230460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d580, cid 3, qid 0 00:21:41.318 [2024-12-06 03:30:01.230463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d700, cid 4, qid 0 00:21:41.318 [2024-12-06 03:30:01.230581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.318 [2024-12-06 03:30:01.230587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.318 [2024-12-06 03:30:01.230590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.230593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d700) on tqpair=0x111b690 00:21:41.318 [2024-12-06 03:30:01.230597] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:41.318 [2024-12-06 03:30:01.230602] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.230609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.230615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.230621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.230624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.318 [2024-12-06 
03:30:01.230628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111b690) 00:21:41.318 [2024-12-06 03:30:01.230633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:41.318 [2024-12-06 03:30:01.230643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d700, cid 4, qid 0 00:21:41.318 [2024-12-06 03:30:01.230731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.318 [2024-12-06 03:30:01.230737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.318 [2024-12-06 03:30:01.230740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.230744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d700) on tqpair=0x111b690 00:21:41.318 [2024-12-06 03:30:01.230797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.230807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.230813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.230816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111b690) 00:21:41.318 [2024-12-06 03:30:01.230822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.318 [2024-12-06 03:30:01.230832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d700, cid 4, qid 0 00:21:41.318 [2024-12-06 03:30:01.230905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.318 [2024-12-06 03:30:01.230911] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.318 [2024-12-06 03:30:01.230915] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.230919] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111b690): datao=0, datal=4096, cccid=4 00:21:41.318 [2024-12-06 03:30:01.230923] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x117d700) on tqpair(0x111b690): expected_datao=0, payload_size=4096 00:21:41.318 [2024-12-06 03:30:01.230927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.230957] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.230961] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.272008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.318 [2024-12-06 03:30:01.272023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.318 [2024-12-06 03:30:01.272027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.272031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d700) on tqpair=0x111b690 00:21:41.318 [2024-12-06 03:30:01.272042] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:41.318 [2024-12-06 03:30:01.272058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.272069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.272076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.272080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x111b690) 00:21:41.318 [2024-12-06 03:30:01.272088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.318 [2024-12-06 03:30:01.272102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d700, cid 4, qid 0 00:21:41.318 [2024-12-06 03:30:01.272179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.318 [2024-12-06 03:30:01.272185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.318 [2024-12-06 03:30:01.272189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.272192] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111b690): datao=0, datal=4096, cccid=4 00:21:41.318 [2024-12-06 03:30:01.272196] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x117d700) on tqpair(0x111b690): expected_datao=0, payload_size=4096 00:21:41.318 [2024-12-06 03:30:01.272200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.272211] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.272215] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.313063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.318 [2024-12-06 03:30:01.313074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.318 [2024-12-06 03:30:01.313078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.313081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d700) on tqpair=0x111b690 00:21:41.318 [2024-12-06 03:30:01.313095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:41.318 
[2024-12-06 03:30:01.313105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.313113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.313117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111b690) 00:21:41.318 [2024-12-06 03:30:01.313124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.318 [2024-12-06 03:30:01.313136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d700, cid 4, qid 0 00:21:41.318 [2024-12-06 03:30:01.313204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.318 [2024-12-06 03:30:01.313210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.318 [2024-12-06 03:30:01.313213] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.313217] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111b690): datao=0, datal=4096, cccid=4 00:21:41.318 [2024-12-06 03:30:01.313221] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x117d700) on tqpair(0x111b690): expected_datao=0, payload_size=4096 00:21:41.318 [2024-12-06 03:30:01.313224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.313238] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.313242] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.354017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.318 [2024-12-06 03:30:01.354027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.318 [2024-12-06 03:30:01.354030] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.354034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d700) on tqpair=0x111b690 00:21:41.318 [2024-12-06 03:30:01.354042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.354049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.354058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.354065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.354070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.354074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.354079] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:41.318 [2024-12-06 03:30:01.354083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:41.318 [2024-12-06 03:30:01.354088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:41.318 [2024-12-06 03:30:01.354101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.354105] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111b690) 00:21:41.318 [2024-12-06 03:30:01.354112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.318 [2024-12-06 03:30:01.354118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.354121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.318 [2024-12-06 03:30:01.354125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x111b690) 00:21:41.318 [2024-12-06 03:30:01.354130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.318 [2024-12-06 03:30:01.354144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d700, cid 4, qid 0 00:21:41.318 [2024-12-06 03:30:01.354149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d880, cid 5, qid 0 00:21:41.318 [2024-12-06 03:30:01.354218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.318 [2024-12-06 03:30:01.354224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.318 [2024-12-06 03:30:01.354231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d700) on tqpair=0x111b690 00:21:41.319 [2024-12-06 03:30:01.354240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.319 [2024-12-06 03:30:01.354245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.319 [2024-12-06 03:30:01.354249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d880) on tqpair=0x111b690 00:21:41.319 [2024-12-06 
03:30:01.354260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x111b690) 00:21:41.319 [2024-12-06 03:30:01.354269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.319 [2024-12-06 03:30:01.354279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d880, cid 5, qid 0 00:21:41.319 [2024-12-06 03:30:01.354343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.319 [2024-12-06 03:30:01.354349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.319 [2024-12-06 03:30:01.354352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d880) on tqpair=0x111b690 00:21:41.319 [2024-12-06 03:30:01.354363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x111b690) 00:21:41.319 [2024-12-06 03:30:01.354372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.319 [2024-12-06 03:30:01.354381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d880, cid 5, qid 0 00:21:41.319 [2024-12-06 03:30:01.354460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.319 [2024-12-06 03:30:01.354466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.319 [2024-12-06 03:30:01.354469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x117d880) on tqpair=0x111b690 00:21:41.319 [2024-12-06 03:30:01.354480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x111b690) 00:21:41.319 [2024-12-06 03:30:01.354489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.319 [2024-12-06 03:30:01.354498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d880, cid 5, qid 0 00:21:41.319 [2024-12-06 03:30:01.354557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.319 [2024-12-06 03:30:01.354562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.319 [2024-12-06 03:30:01.354565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d880) on tqpair=0x111b690 00:21:41.319 [2024-12-06 03:30:01.354581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x111b690) 00:21:41.319 [2024-12-06 03:30:01.354591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.319 [2024-12-06 03:30:01.354597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111b690) 00:21:41.319 [2024-12-06 03:30:01.354607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:41.319 [2024-12-06 03:30:01.354613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x111b690) 00:21:41.319 [2024-12-06 03:30:01.354622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.319 [2024-12-06 03:30:01.354628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x111b690) 00:21:41.319 [2024-12-06 03:30:01.354636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.319 [2024-12-06 03:30:01.354647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d880, cid 5, qid 0 00:21:41.319 [2024-12-06 03:30:01.354652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d700, cid 4, qid 0 00:21:41.319 [2024-12-06 03:30:01.354656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117da00, cid 6, qid 0 00:21:41.319 [2024-12-06 03:30:01.354660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117db80, cid 7, qid 0 00:21:41.319 [2024-12-06 03:30:01.354791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.319 [2024-12-06 03:30:01.354798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.319 [2024-12-06 03:30:01.354801] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354804] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111b690): datao=0, datal=8192, cccid=5 00:21:41.319 [2024-12-06 03:30:01.354808] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x117d880) on tqpair(0x111b690): expected_datao=0, payload_size=8192 00:21:41.319 [2024-12-06 03:30:01.354812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354861] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354865] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.319 [2024-12-06 03:30:01.354874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.319 [2024-12-06 03:30:01.354877] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354880] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111b690): datao=0, datal=512, cccid=4 00:21:41.319 [2024-12-06 03:30:01.354884] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x117d700) on tqpair(0x111b690): expected_datao=0, payload_size=512 00:21:41.319 [2024-12-06 03:30:01.354888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354893] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354896] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.319 [2024-12-06 03:30:01.354906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.319 [2024-12-06 03:30:01.354909] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354912] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111b690): datao=0, datal=512, cccid=6 00:21:41.319 [2024-12-06 03:30:01.354916] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x117da00) on tqpair(0x111b690): expected_datao=0, payload_size=512 00:21:41.319 [2024-12-06 03:30:01.354920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354925] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354928] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:41.319 [2024-12-06 03:30:01.354939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:41.319 [2024-12-06 03:30:01.354942] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354945] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111b690): datao=0, datal=4096, cccid=7 00:21:41.319 [2024-12-06 03:30:01.354955] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x117db80) on tqpair(0x111b690): expected_datao=0, payload_size=4096 00:21:41.319 [2024-12-06 03:30:01.354958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354964] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354967] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.319 [2024-12-06 03:30:01.354979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.319 [2024-12-06 03:30:01.354982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.354986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d880) on tqpair=0x111b690 00:21:41.319 [2024-12-06 03:30:01.354996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.319 [2024-12-06 03:30:01.355001] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.319 [2024-12-06 03:30:01.355004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.355007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d700) on tqpair=0x111b690 00:21:41.319 [2024-12-06 03:30:01.355016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.319 [2024-12-06 03:30:01.355021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.319 [2024-12-06 03:30:01.355024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.355027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117da00) on tqpair=0x111b690 00:21:41.319 [2024-12-06 03:30:01.355033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.319 [2024-12-06 03:30:01.355037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.319 [2024-12-06 03:30:01.355041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.319 [2024-12-06 03:30:01.355044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117db80) on tqpair=0x111b690 00:21:41.319 ===================================================== 00:21:41.319 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.319 ===================================================== 00:21:41.319 Controller Capabilities/Features 00:21:41.319 ================================ 00:21:41.319 Vendor ID: 8086 00:21:41.319 Subsystem Vendor ID: 8086 00:21:41.319 Serial Number: SPDK00000000000001 00:21:41.319 Model Number: SPDK bdev Controller 00:21:41.319 Firmware Version: 25.01 00:21:41.319 Recommended Arb Burst: 6 00:21:41.319 IEEE OUI Identifier: e4 d2 5c 00:21:41.319 Multi-path I/O 00:21:41.319 May have multiple subsystem ports: Yes 00:21:41.319 May have multiple controllers: Yes 00:21:41.319 Associated with SR-IOV VF: No 
00:21:41.319 Max Data Transfer Size: 131072 00:21:41.319 Max Number of Namespaces: 32 00:21:41.319 Max Number of I/O Queues: 127 00:21:41.319 NVMe Specification Version (VS): 1.3 00:21:41.319 NVMe Specification Version (Identify): 1.3 00:21:41.319 Maximum Queue Entries: 128 00:21:41.319 Contiguous Queues Required: Yes 00:21:41.320 Arbitration Mechanisms Supported 00:21:41.320 Weighted Round Robin: Not Supported 00:21:41.320 Vendor Specific: Not Supported 00:21:41.320 Reset Timeout: 15000 ms 00:21:41.320 Doorbell Stride: 4 bytes 00:21:41.320 NVM Subsystem Reset: Not Supported 00:21:41.320 Command Sets Supported 00:21:41.320 NVM Command Set: Supported 00:21:41.320 Boot Partition: Not Supported 00:21:41.320 Memory Page Size Minimum: 4096 bytes 00:21:41.320 Memory Page Size Maximum: 4096 bytes 00:21:41.320 Persistent Memory Region: Not Supported 00:21:41.320 Optional Asynchronous Events Supported 00:21:41.320 Namespace Attribute Notices: Supported 00:21:41.320 Firmware Activation Notices: Not Supported 00:21:41.320 ANA Change Notices: Not Supported 00:21:41.320 PLE Aggregate Log Change Notices: Not Supported 00:21:41.320 LBA Status Info Alert Notices: Not Supported 00:21:41.320 EGE Aggregate Log Change Notices: Not Supported 00:21:41.320 Normal NVM Subsystem Shutdown event: Not Supported 00:21:41.320 Zone Descriptor Change Notices: Not Supported 00:21:41.320 Discovery Log Change Notices: Not Supported 00:21:41.320 Controller Attributes 00:21:41.320 128-bit Host Identifier: Supported 00:21:41.320 Non-Operational Permissive Mode: Not Supported 00:21:41.320 NVM Sets: Not Supported 00:21:41.320 Read Recovery Levels: Not Supported 00:21:41.320 Endurance Groups: Not Supported 00:21:41.320 Predictable Latency Mode: Not Supported 00:21:41.320 Traffic Based Keep ALive: Not Supported 00:21:41.320 Namespace Granularity: Not Supported 00:21:41.320 SQ Associations: Not Supported 00:21:41.320 UUID List: Not Supported 00:21:41.320 Multi-Domain Subsystem: Not Supported 00:21:41.320 
Fixed Capacity Management: Not Supported 00:21:41.320 Variable Capacity Management: Not Supported 00:21:41.320 Delete Endurance Group: Not Supported 00:21:41.320 Delete NVM Set: Not Supported 00:21:41.320 Extended LBA Formats Supported: Not Supported 00:21:41.320 Flexible Data Placement Supported: Not Supported 00:21:41.320 00:21:41.320 Controller Memory Buffer Support 00:21:41.320 ================================ 00:21:41.320 Supported: No 00:21:41.320 00:21:41.320 Persistent Memory Region Support 00:21:41.320 ================================ 00:21:41.320 Supported: No 00:21:41.320 00:21:41.320 Admin Command Set Attributes 00:21:41.320 ============================ 00:21:41.320 Security Send/Receive: Not Supported 00:21:41.320 Format NVM: Not Supported 00:21:41.320 Firmware Activate/Download: Not Supported 00:21:41.320 Namespace Management: Not Supported 00:21:41.320 Device Self-Test: Not Supported 00:21:41.320 Directives: Not Supported 00:21:41.320 NVMe-MI: Not Supported 00:21:41.320 Virtualization Management: Not Supported 00:21:41.320 Doorbell Buffer Config: Not Supported 00:21:41.320 Get LBA Status Capability: Not Supported 00:21:41.320 Command & Feature Lockdown Capability: Not Supported 00:21:41.320 Abort Command Limit: 4 00:21:41.320 Async Event Request Limit: 4 00:21:41.320 Number of Firmware Slots: N/A 00:21:41.320 Firmware Slot 1 Read-Only: N/A 00:21:41.320 Firmware Activation Without Reset: N/A 00:21:41.320 Multiple Update Detection Support: N/A 00:21:41.320 Firmware Update Granularity: No Information Provided 00:21:41.320 Per-Namespace SMART Log: No 00:21:41.320 Asymmetric Namespace Access Log Page: Not Supported 00:21:41.320 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:41.320 Command Effects Log Page: Supported 00:21:41.320 Get Log Page Extended Data: Supported 00:21:41.320 Telemetry Log Pages: Not Supported 00:21:41.320 Persistent Event Log Pages: Not Supported 00:21:41.320 Supported Log Pages Log Page: May Support 00:21:41.320 Commands Supported & 
Effects Log Page: Not Supported 00:21:41.320 Feature Identifiers & Effects Log Page:May Support 00:21:41.320 NVMe-MI Commands & Effects Log Page: May Support 00:21:41.320 Data Area 4 for Telemetry Log: Not Supported 00:21:41.320 Error Log Page Entries Supported: 128 00:21:41.320 Keep Alive: Supported 00:21:41.320 Keep Alive Granularity: 10000 ms 00:21:41.320 00:21:41.320 NVM Command Set Attributes 00:21:41.320 ========================== 00:21:41.320 Submission Queue Entry Size 00:21:41.320 Max: 64 00:21:41.320 Min: 64 00:21:41.320 Completion Queue Entry Size 00:21:41.320 Max: 16 00:21:41.320 Min: 16 00:21:41.320 Number of Namespaces: 32 00:21:41.320 Compare Command: Supported 00:21:41.320 Write Uncorrectable Command: Not Supported 00:21:41.320 Dataset Management Command: Supported 00:21:41.320 Write Zeroes Command: Supported 00:21:41.320 Set Features Save Field: Not Supported 00:21:41.320 Reservations: Supported 00:21:41.320 Timestamp: Not Supported 00:21:41.320 Copy: Supported 00:21:41.320 Volatile Write Cache: Present 00:21:41.320 Atomic Write Unit (Normal): 1 00:21:41.320 Atomic Write Unit (PFail): 1 00:21:41.320 Atomic Compare & Write Unit: 1 00:21:41.320 Fused Compare & Write: Supported 00:21:41.320 Scatter-Gather List 00:21:41.320 SGL Command Set: Supported 00:21:41.320 SGL Keyed: Supported 00:21:41.320 SGL Bit Bucket Descriptor: Not Supported 00:21:41.320 SGL Metadata Pointer: Not Supported 00:21:41.320 Oversized SGL: Not Supported 00:21:41.320 SGL Metadata Address: Not Supported 00:21:41.320 SGL Offset: Supported 00:21:41.320 Transport SGL Data Block: Not Supported 00:21:41.320 Replay Protected Memory Block: Not Supported 00:21:41.320 00:21:41.320 Firmware Slot Information 00:21:41.320 ========================= 00:21:41.320 Active slot: 1 00:21:41.320 Slot 1 Firmware Revision: 25.01 00:21:41.320 00:21:41.320 00:21:41.320 Commands Supported and Effects 00:21:41.320 ============================== 00:21:41.320 Admin Commands 00:21:41.320 -------------- 
00:21:41.320 Get Log Page (02h): Supported 00:21:41.320 Identify (06h): Supported 00:21:41.320 Abort (08h): Supported 00:21:41.320 Set Features (09h): Supported 00:21:41.320 Get Features (0Ah): Supported 00:21:41.320 Asynchronous Event Request (0Ch): Supported 00:21:41.320 Keep Alive (18h): Supported 00:21:41.320 I/O Commands 00:21:41.320 ------------ 00:21:41.320 Flush (00h): Supported LBA-Change 00:21:41.320 Write (01h): Supported LBA-Change 00:21:41.320 Read (02h): Supported 00:21:41.320 Compare (05h): Supported 00:21:41.320 Write Zeroes (08h): Supported LBA-Change 00:21:41.320 Dataset Management (09h): Supported LBA-Change 00:21:41.320 Copy (19h): Supported LBA-Change 00:21:41.320 00:21:41.320 Error Log 00:21:41.320 ========= 00:21:41.320 00:21:41.320 Arbitration 00:21:41.320 =========== 00:21:41.320 Arbitration Burst: 1 00:21:41.320 00:21:41.320 Power Management 00:21:41.320 ================ 00:21:41.320 Number of Power States: 1 00:21:41.320 Current Power State: Power State #0 00:21:41.320 Power State #0: 00:21:41.320 Max Power: 0.00 W 00:21:41.320 Non-Operational State: Operational 00:21:41.320 Entry Latency: Not Reported 00:21:41.320 Exit Latency: Not Reported 00:21:41.320 Relative Read Throughput: 0 00:21:41.320 Relative Read Latency: 0 00:21:41.320 Relative Write Throughput: 0 00:21:41.320 Relative Write Latency: 0 00:21:41.320 Idle Power: Not Reported 00:21:41.320 Active Power: Not Reported 00:21:41.320 Non-Operational Permissive Mode: Not Supported 00:21:41.320 00:21:41.320 Health Information 00:21:41.320 ================== 00:21:41.320 Critical Warnings: 00:21:41.320 Available Spare Space: OK 00:21:41.320 Temperature: OK 00:21:41.320 Device Reliability: OK 00:21:41.320 Read Only: No 00:21:41.320 Volatile Memory Backup: OK 00:21:41.320 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:41.320 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:41.320 Available Spare: 0% 00:21:41.320 Available Spare Threshold: 0% 00:21:41.320 Life Percentage 
Used:[2024-12-06 03:30:01.355126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.320 [2024-12-06 03:30:01.355131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x111b690) 00:21:41.320 [2024-12-06 03:30:01.355136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.320 [2024-12-06 03:30:01.355147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117db80, cid 7, qid 0 00:21:41.320 [2024-12-06 03:30:01.355232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.320 [2024-12-06 03:30:01.355238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.320 [2024-12-06 03:30:01.355241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.320 [2024-12-06 03:30:01.355244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117db80) on tqpair=0x111b690 00:21:41.320 [2024-12-06 03:30:01.355274] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:41.320 [2024-12-06 03:30:01.355285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d100) on tqpair=0x111b690 00:21:41.320 [2024-12-06 03:30:01.355290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.321 [2024-12-06 03:30:01.355295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d280) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.355299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.321 [2024-12-06 03:30:01.355304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d400) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.355308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.321 [2024-12-06 03:30:01.355313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d580) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.355317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.321 [2024-12-06 03:30:01.355323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111b690) 00:21:41.321 [2024-12-06 03:30:01.355335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.321 [2024-12-06 03:30:01.355347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d580, cid 3, qid 0 00:21:41.321 [2024-12-06 03:30:01.355408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.321 [2024-12-06 03:30:01.355415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.321 [2024-12-06 03:30:01.355418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d580) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.355426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111b690) 00:21:41.321 [2024-12-06 03:30:01.355438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.321 [2024-12-06 03:30:01.355450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d580, cid 3, qid 0 00:21:41.321 [2024-12-06 03:30:01.355519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.321 [2024-12-06 03:30:01.355524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.321 [2024-12-06 03:30:01.355527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d580) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.355535] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:41.321 [2024-12-06 03:30:01.355539] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:41.321 [2024-12-06 03:30:01.355549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111b690) 00:21:41.321 [2024-12-06 03:30:01.355563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.321 [2024-12-06 03:30:01.355572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d580, cid 3, qid 0 00:21:41.321 [2024-12-06 03:30:01.355644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.321 [2024-12-06 03:30:01.355650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.321 [2024-12-06 03:30:01.355653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355656] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d580) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.355665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111b690) 00:21:41.321 [2024-12-06 03:30:01.355678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.321 [2024-12-06 03:30:01.355688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d580, cid 3, qid 0 00:21:41.321 [2024-12-06 03:30:01.355760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.321 [2024-12-06 03:30:01.355766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.321 [2024-12-06 03:30:01.355768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d580) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.355780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111b690) 00:21:41.321 [2024-12-06 03:30:01.355793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.321 [2024-12-06 03:30:01.355802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d580, cid 3, qid 0 00:21:41.321 [2024-12-06 03:30:01.355877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.321 [2024-12-06 
03:30:01.355883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.321 [2024-12-06 03:30:01.355886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d580) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.355897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.355904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111b690) 00:21:41.321 [2024-12-06 03:30:01.355910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.321 [2024-12-06 03:30:01.355918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d580, cid 3, qid 0 00:21:41.321 [2024-12-06 03:30:01.359959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.321 [2024-12-06 03:30:01.359967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.321 [2024-12-06 03:30:01.359970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.359973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d580) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.359982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.359986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.359989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111b690) 00:21:41.321 [2024-12-06 03:30:01.359995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.321 [2024-12-06 
03:30:01.360006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x117d580, cid 3, qid 0 00:21:41.321 [2024-12-06 03:30:01.360072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:41.321 [2024-12-06 03:30:01.360078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:41.321 [2024-12-06 03:30:01.360081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:41.321 [2024-12-06 03:30:01.360084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x117d580) on tqpair=0x111b690 00:21:41.321 [2024-12-06 03:30:01.360090] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:21:41.321 0% 00:21:41.321 Data Units Read: 0 00:21:41.321 Data Units Written: 0 00:21:41.321 Host Read Commands: 0 00:21:41.321 Host Write Commands: 0 00:21:41.321 Controller Busy Time: 0 minutes 00:21:41.321 Power Cycles: 0 00:21:41.321 Power On Hours: 0 hours 00:21:41.321 Unsafe Shutdowns: 0 00:21:41.321 Unrecoverable Media Errors: 0 00:21:41.321 Lifetime Error Log Entries: 0 00:21:41.321 Warning Temperature Time: 0 minutes 00:21:41.321 Critical Temperature Time: 0 minutes 00:21:41.321 00:21:41.321 Number of Queues 00:21:41.321 ================ 00:21:41.321 Number of I/O Submission Queues: 127 00:21:41.321 Number of I/O Completion Queues: 127 00:21:41.321 00:21:41.321 Active Namespaces 00:21:41.321 ================= 00:21:41.321 Namespace ID:1 00:21:41.321 Error Recovery Timeout: Unlimited 00:21:41.321 Command Set Identifier: NVM (00h) 00:21:41.321 Deallocate: Supported 00:21:41.321 Deallocated/Unwritten Error: Not Supported 00:21:41.321 Deallocated Read Value: Unknown 00:21:41.321 Deallocate in Write Zeroes: Not Supported 00:21:41.321 Deallocated Guard Field: 0xFFFF 00:21:41.321 Flush: Supported 00:21:41.322 Reservation: Supported 00:21:41.322 Namespace Sharing Capabilities: Multiple Controllers 00:21:41.322 Size (in LBAs): 131072 
(0GiB) 00:21:41.322 Capacity (in LBAs): 131072 (0GiB) 00:21:41.322 Utilization (in LBAs): 131072 (0GiB) 00:21:41.322 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:41.322 EUI64: ABCDEF0123456789 00:21:41.322 UUID: 22219eeb-c214-4360-aa69-515c1b88de03 00:21:41.322 Thin Provisioning: Not Supported 00:21:41.322 Per-NS Atomic Units: Yes 00:21:41.322 Atomic Boundary Size (Normal): 0 00:21:41.322 Atomic Boundary Size (PFail): 0 00:21:41.322 Atomic Boundary Offset: 0 00:21:41.322 Maximum Single Source Range Length: 65535 00:21:41.322 Maximum Copy Length: 65535 00:21:41.322 Maximum Source Range Count: 1 00:21:41.322 NGUID/EUI64 Never Reused: No 00:21:41.322 Namespace Write Protected: No 00:21:41.322 Number of LBA Formats: 1 00:21:41.322 Current LBA Format: LBA Format #00 00:21:41.322 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:41.322 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:41.322 03:30:01 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.322 rmmod nvme_tcp 00:21:41.322 rmmod nvme_fabrics 00:21:41.322 rmmod nvme_keyring 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2693763 ']' 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2693763 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2693763 ']' 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2693763 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.322 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2693763 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2693763' 00:21:41.581 killing process with pid 2693763 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2693763 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2693763 00:21:41.581 03:30:01 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.581 03:30:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:44.118 00:21:44.118 real 0m8.844s 00:21:44.118 user 0m5.604s 00:21:44.118 sys 0m4.472s 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.118 ************************************ 00:21:44.118 END TEST nvmf_identify 00:21:44.118 ************************************ 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:44.118 03:30:03 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.118 ************************************ 00:21:44.118 START TEST nvmf_perf 00:21:44.118 ************************************ 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:44.118 * Looking for test storage... 00:21:44.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.118 
03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.118 03:30:03 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:44.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.118 --rc genhtml_branch_coverage=1 00:21:44.118 --rc genhtml_function_coverage=1 00:21:44.118 --rc genhtml_legend=1 00:21:44.118 --rc geninfo_all_blocks=1 00:21:44.118 --rc geninfo_unexecuted_blocks=1 00:21:44.118 00:21:44.118 ' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:44.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.118 --rc genhtml_branch_coverage=1 00:21:44.118 --rc genhtml_function_coverage=1 00:21:44.118 --rc genhtml_legend=1 00:21:44.118 --rc geninfo_all_blocks=1 00:21:44.118 --rc geninfo_unexecuted_blocks=1 00:21:44.118 00:21:44.118 ' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:44.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.118 --rc genhtml_branch_coverage=1 00:21:44.118 --rc genhtml_function_coverage=1 00:21:44.118 --rc genhtml_legend=1 00:21:44.118 --rc geninfo_all_blocks=1 00:21:44.118 --rc geninfo_unexecuted_blocks=1 00:21:44.118 00:21:44.118 ' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:44.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.118 --rc genhtml_branch_coverage=1 00:21:44.118 --rc genhtml_function_coverage=1 00:21:44.118 --rc genhtml_legend=1 00:21:44.118 --rc geninfo_all_blocks=1 00:21:44.118 --rc geninfo_unexecuted_blocks=1 00:21:44.118 00:21:44.118 ' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.118 03:30:03 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.118 03:30:03 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.118 03:30:03 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:44.118 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:21:44.119 03:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:49.391 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.391 
03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:49.391 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:49.391 Found net devices under 0000:86:00.0: cvl_0_0 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:49.391 Found net devices under 0000:86:00.1: cvl_0_1 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.391 03:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:49.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:49.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms
00:21:49.391
00:21:49.391 --- 10.0.0.2 ping statistics ---
00:21:49.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:49.391 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:49.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:49.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms
00:21:49.391
00:21:49.391 --- 10.0.0.1 ping statistics ---
00:21:49.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:49.391 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:49.391 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2697701
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2697701
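The `waitforlisten 2697701` call above blocks until the freshly launched `nvmf_tgt` answers on its RPC socket; the xtrace shows the real helper in `common/autotest_common.sh` uses `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`. A minimal, self-contained sketch of that polling pattern follows; the `probe` argument and the commented example are illustrative, not the actual helper:

```shell
# Hedged sketch of a waitforlisten-style poll loop: retry a probe command
# until it succeeds or a retry budget (100 in the log's helper) is spent.
# `probe` here is any caller-supplied command; the real test probes the
# target's UNIX-domain RPC socket instead.
waitforlisten() {
    probe=$1
    max_retries=${2:-100}
    i=0
    while ! $probe; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for listener" >&2
            return 1
        fi
        sleep 0.1
    done
}

# Hypothetical usage: wait until the RPC socket exists.
# waitforlisten "test -S /var/tmp/spdk.sock" 100
```

Any cheap command that fails until the listener is up works as the probe; the retry budget bounds how long a hung target can stall the test.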
03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2697701 ']'
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:49.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:49.392 [2024-12-06 03:30:09.228563] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization...
00:21:49.392 [2024-12-06 03:30:09.228611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:49.392 [2024-12-06 03:30:09.295874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:49.392 [2024-12-06 03:30:09.341388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:49.392 [2024-12-06 03:30:09.341424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:49.392 [2024-12-06 03:30:09.341431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:49.392 [2024-12-06 03:30:09.341438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:49.392 [2024-12-06 03:30:09.341444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:49.392 [2024-12-06 03:30:09.343032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:49.392 [2024-12-06 03:30:09.343133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:49.392 [2024-12-06 03:30:09.343237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:49.392 [2024-12-06 03:30:09.343239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:21:49.392 03:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:21:52.681 03:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:21:52.681 03:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:21:52.681 03:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0
00:21:52.681 03:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:21:52.940 03:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:21:52.940 03:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:21:52.940 03:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:21:52.940 03:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:21:52.940 03:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:21:53.199 [2024-12-06 03:30:13.111528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:53.199 03:30:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:53.459 03:30:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:21:53.459 03:30:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:53.459 03:30:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:21:53.459 03:30:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:21:53.718 03:30:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:53.977 [2024-12-06 03:30:13.926604] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:53.977 03:30:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:54.236 03:30:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:21:54.236 03:30:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:21:54.236 03:30:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:21:54.236 03:30:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:21:55.614 Initializing NVMe Controllers
00:21:55.615 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:21:55.615 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:21:55.615 Initialization complete. Launching workers.
00:21:55.615 ========================================================
00:21:55.615 Latency(us)
00:21:55.615 Device Information : IOPS MiB/s Average min max
00:21:55.615 PCIE (0000:5e:00.0) NSID 1 from core 0: 97068.63 379.17 329.15 25.68 7245.12
00:21:55.615 ========================================================
00:21:55.615 Total : 97068.63 379.17 329.15 25.68 7245.12
00:21:55.615
00:21:55.615 03:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:56.552 Initializing NVMe Controllers
00:21:56.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:56.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:56.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:56.552 Initialization complete. Launching workers.
00:21:56.552 ========================================================
00:21:56.552 Latency(us)
00:21:56.552 Device Information : IOPS MiB/s Average min max
00:21:56.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.00 0.36 11305.89 115.29 45590.02
00:21:56.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 49.00 0.19 20973.00 7192.97 47899.42
00:21:56.552 ========================================================
00:21:56.552 Total : 140.00 0.55 14689.38 115.29 47899.42
00:21:56.552
00:21:56.552 03:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:57.930 Initializing NVMe Controllers
00:21:57.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:57.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:57.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:57.930 Initialization complete. Launching workers.
00:21:57.930 ========================================================
00:21:57.930 Latency(us)
00:21:57.930 Device Information : IOPS MiB/s Average min max
00:21:57.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10927.00 42.68 2927.46 415.64 6782.63
00:21:57.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3751.00 14.65 8565.91 6339.27 16093.49
00:21:57.930 ========================================================
00:21:57.930 Total : 14678.00 57.34 4368.38 415.64 16093.49
00:21:57.930
00:21:57.930 03:30:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:21:57.930 03:30:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:21:57.930 03:30:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:00.469 Initializing NVMe Controllers
00:22:00.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:00.469 Controller IO queue size 128, less than required.
00:22:00.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.469 Controller IO queue size 128, less than required.
00:22:00.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:00.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:00.469 Initialization complete. Launching workers.
00:22:00.469 ========================================================
00:22:00.469 Latency(us)
00:22:00.469 Device Information : IOPS MiB/s Average min max
00:22:00.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1715.94 428.99 75759.89 49320.44 137598.28
00:22:00.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 615.98 153.99 214376.97 88749.49 355242.46
00:22:00.469 ========================================================
00:22:00.469 Total : 2331.92 582.98 112375.73 49320.44 355242.46
00:22:00.469
00:22:00.469 03:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:00.469 No valid NVMe controllers or AIO or URING devices found
00:22:00.469 Initializing NVMe Controllers
00:22:00.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:00.469 Controller IO queue size 128, less than required.
00:22:00.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.469 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:00.469 Controller IO queue size 128, less than required.
00:22:00.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.469 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:00.469 WARNING: Some requested NVMe devices were skipped
00:22:00.728 03:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:03.263 Initializing NVMe Controllers
00:22:03.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:03.263 Controller IO queue size 128, less than required.
00:22:03.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:03.263 Controller IO queue size 128, less than required.
00:22:03.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:03.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:03.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:03.263 Initialization complete. Launching workers.
00:22:03.263 00:22:03.263 ==================== 00:22:03.263 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:03.263 TCP transport: 00:22:03.263 polls: 15387 00:22:03.263 idle_polls: 11783 00:22:03.263 sock_completions: 3604 00:22:03.263 nvme_completions: 6097 00:22:03.263 submitted_requests: 9166 00:22:03.263 queued_requests: 1 00:22:03.263 00:22:03.263 ==================== 00:22:03.263 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:03.263 TCP transport: 00:22:03.263 polls: 11398 00:22:03.263 idle_polls: 7708 00:22:03.263 sock_completions: 3690 00:22:03.263 nvme_completions: 6563 00:22:03.263 submitted_requests: 9902 00:22:03.263 queued_requests: 1 00:22:03.263 ======================================================== 00:22:03.263 Latency(us) 00:22:03.263 Device Information : IOPS MiB/s Average min max 00:22:03.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1523.91 380.98 86653.90 48139.79 143396.51 00:22:03.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1640.41 410.10 78132.53 46964.16 119066.85 00:22:03.263 ======================================================== 00:22:03.263 Total : 3164.32 791.08 82236.36 46964.16 143396.51 00:22:03.263 00:22:03.263 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:03.263 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.264 rmmod nvme_tcp 00:22:03.264 rmmod nvme_fabrics 00:22:03.264 rmmod nvme_keyring 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2697701 ']' 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2697701 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2697701 ']' 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2697701 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.264 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697701 00:22:03.523 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.523 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.523 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697701' 00:22:03.523 killing process with pid 2697701 00:22:03.523 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2697701 00:22:03.523 03:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2697701 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.900 03:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.822 03:30:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.822 00:22:06.822 real 0m23.146s 00:22:06.822 user 1m1.374s 00:22:06.822 sys 0m7.775s 00:22:06.822 03:30:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.822 03:30:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.822 ************************************ 00:22:06.822 END TEST nvmf_perf 00:22:06.822 ************************************ 00:22:07.082 03:30:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:07.082 03:30:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.082 03:30:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.082 03:30:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.082 ************************************ 00:22:07.082 START TEST nvmf_fio_host 00:22:07.082 ************************************ 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:07.082 * Looking for test storage... 00:22:07.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.082 03:30:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.082 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.083 03:30:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.083 --rc genhtml_branch_coverage=1 00:22:07.083 --rc genhtml_function_coverage=1 00:22:07.083 --rc genhtml_legend=1 00:22:07.083 --rc geninfo_all_blocks=1 00:22:07.083 --rc geninfo_unexecuted_blocks=1 00:22:07.083 00:22:07.083 ' 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.083 --rc genhtml_branch_coverage=1 00:22:07.083 --rc genhtml_function_coverage=1 00:22:07.083 --rc genhtml_legend=1 00:22:07.083 --rc geninfo_all_blocks=1 00:22:07.083 --rc geninfo_unexecuted_blocks=1 00:22:07.083 00:22:07.083 ' 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.083 --rc genhtml_branch_coverage=1 00:22:07.083 --rc genhtml_function_coverage=1 00:22:07.083 --rc genhtml_legend=1 00:22:07.083 --rc geninfo_all_blocks=1 00:22:07.083 --rc geninfo_unexecuted_blocks=1 00:22:07.083 00:22:07.083 ' 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.083 --rc genhtml_branch_coverage=1 00:22:07.083 --rc genhtml_function_coverage=1 00:22:07.083 --rc genhtml_legend=1 00:22:07.083 --rc geninfo_all_blocks=1 00:22:07.083 --rc geninfo_unexecuted_blocks=1 00:22:07.083 00:22:07.083 ' 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.083 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.084 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.343 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.343 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.343 03:30:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.343 03:30:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:12.621 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:12.621 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.621 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.622 03:30:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:12.622 Found net devices under 0000:86:00.0: cvl_0_0 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:12.622 Found net devices under 0000:86:00.1: cvl_0_1 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.622 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.622 03:30:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:22:12.883 00:22:12.883 --- 10.0.0.2 ping statistics --- 00:22:12.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.883 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:22:12.883 00:22:12.883 --- 10.0.0.1 ping statistics --- 00:22:12.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.883 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2703888 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2703888 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2703888 ']' 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.883 03:30:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.143 [2024-12-06 03:30:33.043578] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:22:13.143 [2024-12-06 03:30:33.043623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.143 [2024-12-06 03:30:33.110257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.143 [2024-12-06 03:30:33.152797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.143 [2024-12-06 03:30:33.152837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:13.143 [2024-12-06 03:30:33.152844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.143 [2024-12-06 03:30:33.152850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.143 [2024-12-06 03:30:33.152855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.143 [2024-12-06 03:30:33.154420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.143 [2024-12-06 03:30:33.154518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.143 [2024-12-06 03:30:33.154603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.143 [2024-12-06 03:30:33.154605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.143 03:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.143 03:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:13.143 03:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:13.402 [2024-12-06 03:30:33.424956] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.402 03:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:13.402 03:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.402 03:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.402 03:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:13.662 Malloc1 00:22:13.662 03:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.921 03:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:14.181 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.181 [2024-12-06 03:30:34.297694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:14.441 03:30:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:14.441 03:30:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.700 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:14.700 fio-3.35 00:22:14.700 Starting 1 thread 00:22:17.244 00:22:17.244 test: (groupid=0, jobs=1): err= 0: pid=2704479: Fri Dec 6 03:30:37 2024 00:22:17.244 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.9MiB/2005msec) 00:22:17.244 slat (nsec): min=1532, max=243166, avg=1739.03, stdev=2138.39 00:22:17.244 clat (usec): min=2545, max=10454, avg=6075.14, stdev=467.18 00:22:17.244 lat (usec): min=2570, max=10456, avg=6076.88, stdev=467.07 00:22:17.244 clat percentiles (usec): 00:22:17.244 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669], 00:22:17.244 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:22:17.244 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:22:17.244 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8029], 99.95th=[ 9896], 00:22:17.244 | 99.99th=[10290] 00:22:17.244 bw ( KiB/s): min=45240, max=47136, per=99.93%, avg=46372.00, stdev=813.76, samples=4 00:22:17.244 iops : min=11310, max=11784, avg=11593.00, stdev=203.44, samples=4 00:22:17.244 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.2MiB/2005msec); 0 zone resets 00:22:17.244 slat (nsec): min=1577, max=185165, avg=1811.61, stdev=1433.79 00:22:17.244 clat (usec): min=1976, max=10038, avg=4923.71, stdev=385.23 00:22:17.244 lat (usec): min=1988, max=10040, avg=4925.53, stdev=385.17 00:22:17.244 clat percentiles (usec): 00:22:17.244 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:17.244 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 
00:22:17.244 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5473], 00:22:17.244 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 7439], 99.95th=[ 9241], 00:22:17.244 | 99.99th=[ 9503] 00:22:17.244 bw ( KiB/s): min=45512, max=46528, per=100.00%, avg=46084.00, stdev=468.87, samples=4 00:22:17.244 iops : min=11378, max=11632, avg=11521.00, stdev=117.22, samples=4 00:22:17.244 lat (msec) : 2=0.01%, 4=0.35%, 10=99.63%, 20=0.02% 00:22:17.244 cpu : usr=69.96%, sys=27.54%, ctx=378, majf=0, minf=2 00:22:17.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:17.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:17.244 issued rwts: total=23260,23095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:17.244 00:22:17.244 Run status group 0 (all jobs): 00:22:17.244 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.9MiB (95.3MB), run=2005-2005msec 00:22:17.244 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.2MiB (94.6MB), run=2005-2005msec 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:17.244 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:17.245 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:17.245 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:22:17.245 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:17.245 03:30:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:17.503 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:17.503 fio-3.35 00:22:17.503 Starting 1 thread 00:22:18.557 [2024-12-06 03:30:38.418888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d6e0 is same with the state(6) to be set 00:22:18.557 [2024-12-06 03:30:38.418958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d6e0 is same with the state(6) to be set 00:22:18.557 [2024-12-06 03:30:38.418967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d6e0 is same with the state(6) to be set 00:22:20.042 00:22:20.042 test: (groupid=0, jobs=1): err= 0: pid=2704974: Fri Dec 6 03:30:39 2024 00:22:20.042 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(338MiB/2004msec) 00:22:20.042 slat (nsec): min=2575, max=83979, avg=2949.97, stdev=1373.70 00:22:20.042 clat (usec): min=1222, max=15444, avg=6979.16, stdev=1768.19 00:22:20.042 lat (usec): min=1227, max=15452, avg=6982.11, stdev=1768.45 00:22:20.042 clat percentiles (usec): 00:22:20.042 | 1.00th=[ 3785], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5473], 00:22:20.042 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7373], 00:22:20.042 | 70.00th=[ 7767], 80.00th=[ 8225], 90.00th=[ 9110], 95.00th=[10028], 00:22:20.042 | 99.00th=[12387], 99.50th=[13304], 99.90th=[14484], 99.95th=[14877], 00:22:20.042 | 99.99th=[15270] 00:22:20.042 bw ( KiB/s): min=80800, max=95360, per=49.69%, avg=85808.00, stdev=6853.80, 
samples=4 00:22:20.042 iops : min= 5050, max= 5960, avg=5363.00, stdev=428.36, samples=4 00:22:20.042 write: IOPS=6213, BW=97.1MiB/s (102MB/s)(175MiB/1807msec); 0 zone resets 00:22:20.042 slat (usec): min=29, max=383, avg=32.32, stdev= 8.86 00:22:20.042 clat (usec): min=3027, max=17622, avg=8755.17, stdev=1636.43 00:22:20.042 lat (usec): min=3057, max=17723, avg=8787.49, stdev=1639.44 00:22:20.042 clat percentiles (usec): 00:22:20.042 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7439], 00:22:20.042 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:22:20.042 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[11731], 00:22:20.042 | 99.00th=[13960], 99.50th=[15401], 99.90th=[16581], 99.95th=[16909], 00:22:20.042 | 99.99th=[17171] 00:22:20.042 bw ( KiB/s): min=82592, max=99200, per=89.93%, avg=89400.00, stdev=7544.37, samples=4 00:22:20.042 iops : min= 5162, max= 6200, avg=5587.50, stdev=471.52, samples=4 00:22:20.042 lat (msec) : 2=0.01%, 4=1.31%, 10=88.85%, 20=9.83% 00:22:20.042 cpu : usr=85.03%, sys=14.07%, ctx=50, majf=0, minf=2 00:22:20.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:20.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:20.042 issued rwts: total=21629,11227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:20.042 00:22:20.042 Run status group 0 (all jobs): 00:22:20.042 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=338MiB (354MB), run=2004-2004msec 00:22:20.042 WRITE: bw=97.1MiB/s (102MB/s), 97.1MiB/s-97.1MiB/s (102MB/s-102MB/s), io=175MiB (184MB), run=1807-1807msec 00:22:20.042 03:30:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:22:20.042 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:20.042 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:20.042 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:20.300 rmmod nvme_tcp 00:22:20.300 rmmod nvme_fabrics 00:22:20.300 rmmod nvme_keyring 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2703888 ']' 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2703888 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2703888 ']' 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2703888 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2703888 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2703888' 00:22:20.300 killing process with pid 2703888 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2703888 00:22:20.300 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2703888 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.558 03:30:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.461 03:30:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:22.461 00:22:22.461 real 0m15.570s 00:22:22.461 user 0m46.413s 00:22:22.461 sys 0m6.388s 00:22:22.461 03:30:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.461 03:30:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.461 ************************************ 00:22:22.461 END TEST nvmf_fio_host 00:22:22.461 ************************************ 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.720 ************************************ 00:22:22.720 START TEST nvmf_failover 00:22:22.720 ************************************ 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:22.720 * Looking for test storage... 
00:22:22.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.720 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:22.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.721 --rc genhtml_branch_coverage=1 00:22:22.721 --rc genhtml_function_coverage=1 00:22:22.721 --rc genhtml_legend=1 00:22:22.721 --rc geninfo_all_blocks=1 00:22:22.721 --rc geninfo_unexecuted_blocks=1 00:22:22.721 00:22:22.721 ' 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:22:22.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.721 --rc genhtml_branch_coverage=1 00:22:22.721 --rc genhtml_function_coverage=1 00:22:22.721 --rc genhtml_legend=1 00:22:22.721 --rc geninfo_all_blocks=1 00:22:22.721 --rc geninfo_unexecuted_blocks=1 00:22:22.721 00:22:22.721 ' 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:22.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.721 --rc genhtml_branch_coverage=1 00:22:22.721 --rc genhtml_function_coverage=1 00:22:22.721 --rc genhtml_legend=1 00:22:22.721 --rc geninfo_all_blocks=1 00:22:22.721 --rc geninfo_unexecuted_blocks=1 00:22:22.721 00:22:22.721 ' 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:22.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.721 --rc genhtml_branch_coverage=1 00:22:22.721 --rc genhtml_function_coverage=1 00:22:22.721 --rc genhtml_legend=1 00:22:22.721 --rc geninfo_all_blocks=1 00:22:22.721 --rc geninfo_unexecuted_blocks=1 00:22:22.721 00:22:22.721 ' 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.721 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.722 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.722 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:22.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:22.722 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:22.722 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:22.722 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:22.979 03:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:29.549 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.549 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:29.549 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.550 03:30:48 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:29.550 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.550 03:30:48 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:29.550 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.550 03:30:48 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:29.550 Found net devices under 0000:86:00.0: cvl_0_0 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:29.550 Found net devices under 0000:86:00.1: cvl_0_1 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:29.550 03:30:48 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:29.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:22:29.550 00:22:29.550 --- 10.0.0.2 ping statistics --- 00:22:29.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.550 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:22:29.550 00:22:29.550 --- 10.0.0.1 ping statistics --- 00:22:29.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.550 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:29.550 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2708815 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2708815 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2708815 ']' 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:29.551 [2024-12-06 03:30:48.773430] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:22:29.551 [2024-12-06 03:30:48.773474] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.551 [2024-12-06 03:30:48.839501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:29.551 [2024-12-06 03:30:48.882154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.551 [2024-12-06 03:30:48.882192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.551 [2024-12-06 03:30:48.882199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.551 [2024-12-06 03:30:48.882209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:29.551 [2024-12-06 03:30:48.882214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.551 [2024-12-06 03:30:48.883610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.551 [2024-12-06 03:30:48.883679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.551 [2024-12-06 03:30:48.883681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.551 03:30:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:29.551 03:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.551 03:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:29.551 [2024-12-06 03:30:49.206553] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.551 03:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:29.551 Malloc0 00:22:29.551 03:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.551 03:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:29.810 03:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.070 [2024-12-06 03:30:50.020500] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.070 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:30.329 [2024-12-06 03:30:50.229083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:30.329 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:30.329 [2024-12-06 03:30:50.437686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2709134 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2709134 /var/tmp/bdevperf.sock 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2709134 ']' 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:30.589 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:30.847 NVMe0n1 00:22:31.106 03:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:31.366 00:22:31.366 03:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2709305 00:22:31.366 03:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:31.366 03:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
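Earlier in this trace, nvmf/common.sh line 33 logged `[: : integer expression expected` because `'[' '' -eq 1 ']'` hands the numeric `-eq` test an empty operand. A minimal self-contained bash sketch of that failure mode and the usual `${var:-0}` guard — the helper name `flag_enabled` is illustrative, not from SPDK:

```shell
#!/usr/bin/env bash
# '[ "" -eq 1 ]' prints "[: : integer expression expected" and fails,
# exactly as the common.sh line 33 message in the log above shows.
# Defaulting the operand with ${1:-0} means test(1) always sees an
# integer, so an empty/unset flag simply evaluates to false.
flag_enabled() {
    [ "${1:-0}" -eq 1 ]
}
```

With the guard, `flag_enabled ""` quietly returns false instead of emitting the error, which is presumably what the `'[' ... -eq 1 ']'` checks in common.sh intend for unset flags.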
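The `trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT` line in the trace above registers a single cleanup path that runs on Ctrl-C, TERM, or exit. A stripped-down sketch of the same pattern, with a temp directory standing in for the test's try.txt and bdevperf process (all names here are illustrative):

```shell
#!/usr/bin/env bash
# Same shape as failover.sh's trap: one handler covers both signals and
# normal exit, so scratch state is removed however the test run ends.
workdir=$(mktemp -d)

cleanup() {
    rm -rf "$workdir"
}
# failover.sh spells these SIGINT SIGTERM EXIT; the unprefixed names
# are the portable form.
trap cleanup INT TERM EXIT

: > "$workdir/try.txt"   # stand-in for the test's scratch file
```

Because EXIT is trapped too, the handler fires even when the script falls off the end, so cleanup logic lives in exactly one place rather than after every exit point.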
00:22:32.304 03:30:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.564 [2024-12-06 03:30:52.567457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daee20 is same with the state(6) to be set 00:22:32.566 03:30:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:35.870 03:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:35.870 00:22:35.870 03:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:36.129 03:30:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:39.416 03:30:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.416 [2024-12-06 03:30:59.348070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.416 03:30:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:40.351 03:31:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:40.611 03:31:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2709305 00:22:47.186 { 00:22:47.186 "results": [ 00:22:47.186 { 00:22:47.186 "job": "NVMe0n1", 00:22:47.186 "core_mask": "0x1", 00:22:47.186 "workload": "verify", 00:22:47.186 "status": "finished", 00:22:47.186 "verify_range": { 00:22:47.186 "start": 0, 00:22:47.186 "length": 16384 00:22:47.186 }, 00:22:47.186 "queue_depth": 128, 00:22:47.186 "io_size": 4096, 00:22:47.186 "runtime": 15.002352, 00:22:47.186 "iops": 10885.759779533237, 00:22:47.186 "mibps": 42.522499138801706, 00:22:47.186 "io_failed": 4837, 00:22:47.186 "io_timeout": 0, 00:22:47.186 "avg_latency_us": 11397.623796327636, 00:22:47.186 "min_latency_us": 498.6434782608696, 00:22:47.186 "max_latency_us": 23820.911304347825 00:22:47.186 } 00:22:47.186 ], 00:22:47.186 "core_count": 1 00:22:47.186 } 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2709134 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2709134 ']' 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2709134 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 2709134 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2709134' 00:22:47.186 killing process with pid 2709134 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2709134 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2709134 00:22:47.186 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:47.186 [2024-12-06 03:30:50.514049] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:22:47.186 [2024-12-06 03:30:50.514105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2709134 ] 00:22:47.186 [2024-12-06 03:30:50.579540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.186 [2024-12-06 03:30:50.621515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.186 Running I/O for 15 seconds... 
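The bdevperf summary JSON above is internally consistent: `mibps` is just `iops * io_size` converted to MiB/s. A quick offline check, with the numbers copied verbatim from the results block:

```shell
# Reproduce the "mibps" field from "iops" and "io_size" in the bdevperf JSON.
iops=10885.759779533237
io_size=4096   # bytes per I/O (the -o 4096 passed to bdevperf)
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.6f", i * s / 1048576 }')
echo "$mibps MiB/s"
```

This yields 42.522499 MiB/s, matching the reported `"mibps": 42.522499138801706`.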
00:22:47.186 10832.00 IOPS, 42.31 MiB/s [2024-12-06T02:31:07.327Z] [2024-12-06 03:30:52.569464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.186 [2024-12-06 03:30:52.569498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.186 [2024-12-06 03:30:52.569514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.186 [2024-12-06 03:30:52.569522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.186 [2024-12-06 03:30:52.569531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.186 [2024-12-06 03:30:52.569538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.186 [2024-12-06 03:30:52.569547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.186 [2024-12-06 03:30:52.569553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.186 [2024-12-06 03:30:52.569562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.186 [2024-12-06 03:30:52.569569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.186 [2024-12-06 03:30:52.569577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.186 
00:22:47.187 [2024-12-06 03:30:52.570117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.187 [2024-12-06 03:30:52.570132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:47.187 [2024-12-06 03:30:52.570372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.187 [2024-12-06 03:30:52.570476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.187 [2024-12-06 03:30:52.570492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.187 [2024-12-06 03:30:52.570506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.187 [2024-12-06 03:30:52.570514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 
[2024-12-06 03:30:52.570624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570706] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.570986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.570994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.571000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.571010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.571017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.571025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.571031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.571039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.571046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 
03:30:52.571054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.571060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.571068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.571074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.571082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.571089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.188 [2024-12-06 03:30:52.571097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.188 [2024-12-06 03:30:52.571104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.189 [2024-12-06 03:30:52.571118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.189 [2024-12-06 03:30:52.571132] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.189 [2024-12-06 03:30:52.571146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.189 [2024-12-06 03:30:52.571161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.189 [2024-12-06 03:30:52.571175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.189 [2024-12-06 03:30:52.571191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.189 [2024-12-06 03:30:52.571220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96328 len:8 PRP1 0x0 PRP2 0x0 00:22:47.189 [2024-12-06 03:30:52.571227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571237] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:47.189 [2024-12-06 03:30:52.571242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.189 [2024-12-06 03:30:52.571248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96336 len:8 PRP1 0x0 PRP2 0x0 00:22:47.189 [2024-12-06 03:30:52.571254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:47.189 [2024-12-06 03:30:52.571266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.189 [2024-12-06 03:30:52.571272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96344 len:8 PRP1 0x0 PRP2 0x0 00:22:47.189 [2024-12-06 03:30:52.571278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:47.189 [2024-12-06 03:30:52.571289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.189 [2024-12-06 03:30:52.571295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96352 len:8 PRP1 0x0 PRP2 0x0 00:22:47.189 [2024-12-06 03:30:52.571301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.189 [2024-12-06 03:30:52.571308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:47.189 [2024-12-06 03:30:52.571313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.189 [2024-12-06 
[2024-12-06 03:30:52.571318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96360 len:8 PRP1 0x0 PRP2 0x0
[2024-12-06 03:30:52.571324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 03:30:52.571331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-06 03:30:52.571336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... identical abort / manual-complete / ABORTED - SQ DELETION sequence repeated for queued WRITEs lba:96368 through lba:96448 ...]
[2024-12-06 03:30:52.584524] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-12-06 03:30:52.584549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 03:30:52.584558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ABORTED - SQ DELETION completion for admin ASYNC EVENT REQUESTs cid:1 through cid:3 ...]
[2024-12-06 03:30:52.584611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-12-06 03:30:52.584640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x597fa0 (9): Bad file descriptor
[2024-12-06 03:30:52.587936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-12-06 03:30:52.618243] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
10671.50 IOPS, 41.69 MiB/s [2024-12-06T02:31:07.330Z] 10798.33 IOPS, 42.18 MiB/s [2024-12-06T02:31:07.330Z] 10885.75 IOPS, 42.52 MiB/s [2024-12-06T02:31:07.330Z]
[2024-12-06 03:30:56.128839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 03:30:56.128883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ABORTED - SQ DELETION completion for admin ASYNC EVENT REQUESTs cid:2, cid:1, cid:0 ...]
[2024-12-06 03:30:56.128936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x597fa0 is same with the state(6) to be set
[2024-12-06 03:30:56.129265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-06 03:30:56.129276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ABORTED - SQ DELETION completion repeated for queued WRITEs lba:24664 through lba:24840 and queued READs lba:23832 through lba:24344 ...]
[2024-12-06 03:30:56.130612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.192 [2024-12-06 03:30:56.130706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:24432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 
[2024-12-06 03:30:56.130867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.192 [2024-12-06 03:30:56.130889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.192 [2024-12-06 03:30:56.130897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.130904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.130913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.130920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.130928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.130935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.130943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.130953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.130961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.130967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.130976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.130983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.130991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.130999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 
[2024-12-06 03:30:56.131127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.193 [2024-12-06 03:30:56.131177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c62d0 is same with the state(6) to be set 00:22:47.193 [2024-12-06 03:30:56.131201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:47.193 [2024-12-06 03:30:56.131207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.193 [2024-12-06 03:30:56.131213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24648 len:8 PRP1 0x0 PRP2 0x0 00:22:47.193 [2024-12-06 
03:30:56.131220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.193 [2024-12-06 03:30:56.131264] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:47.193 [2024-12-06 03:30:56.131273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:47.193 [2024-12-06 03:30:56.134167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:47.193 [2024-12-06 03:30:56.134199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x597fa0 (9): Bad file descriptor 00:22:47.193 [2024-12-06 03:30:56.169978] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:47.193 10791.80 IOPS, 42.16 MiB/s [2024-12-06T02:31:07.334Z] 10836.67 IOPS, 42.33 MiB/s [2024-12-06T02:31:07.334Z] 10844.00 IOPS, 42.36 MiB/s [2024-12-06T02:31:07.334Z] 10878.62 IOPS, 42.49 MiB/s [2024-12-06T02:31:07.334Z] 10887.33 IOPS, 42.53 MiB/s [2024-12-06T02:31:07.334Z] [2024-12-06 03:31:00.566910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.193 [2024-12-06 03:31:00.566969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: WRITE lba:31456-31560 and READ lba:30568-30864 on qid:1, each command completed ABORTED - SQ DELETION (00/08) ...]
[2024-12-06 03:31:00.567773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.194 [2024-12-06 03:31:00.567779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.194 [2024-12-06 03:31:00.567787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 
[2024-12-06 03:31:00.567862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.567990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.567997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 
[2024-12-06 03:31:00.568116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.195 [2024-12-06 03:31:00.568319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.195 [2024-12-06 03:31:00.568327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 
[2024-12-06 03:31:00.568371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.196 [2024-12-06 03:31:00.568392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.196 [2024-12-06 03:31:00.568406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 
lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 
[2024-12-06 03:31:00.568624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.196 [2024-12-06 03:31:00.568645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 [2024-12-06 03:31:00.568859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.196 [2024-12-06 03:31:00.568866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.196 
[2024-12-06 03:31:00.568873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3a70 is same with the state(6) to be set
00:22:47.196 [2024-12-06 03:31:00.568882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:47.196 [2024-12-06 03:31:00.568887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:47.196 [2024-12-06 03:31:00.568895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31440 len:8 PRP1 0x0 PRP2 0x0
00:22:47.196 [2024-12-06 03:31:00.568901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.196 [2024-12-06 03:31:00.568946] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:22:47.196 [2024-12-06 03:31:00.568975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.197 [2024-12-06 03:31:00.568983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.197 [2024-12-06 03:31:00.568991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.197 [2024-12-06 03:31:00.568998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.197 [2024-12-06 03:31:00.569005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.197 [2024-12-06 03:31:00.569011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.197 [2024-12-06 03:31:00.569018]
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:47.197 [2024-12-06 03:31:00.569027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.197 [2024-12-06 03:31:00.569035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:47.197 [2024-12-06 03:31:00.571930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:47.197 [2024-12-06 03:31:00.571966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x597fa0 (9): Bad file descriptor
00:22:47.197 [2024-12-06 03:31:00.600649] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:22:47.197 10863.00 IOPS, 42.43 MiB/s
[2024-12-06T02:31:07.338Z] 10857.45 IOPS, 42.41 MiB/s
[2024-12-06T02:31:07.338Z] 10852.08 IOPS, 42.39 MiB/s
[2024-12-06T02:31:07.338Z] 10870.62 IOPS, 42.46 MiB/s
[2024-12-06T02:31:07.338Z] 10885.86 IOPS, 42.52 MiB/s
00:22:47.197 Latency(us)
00:22:47.197 [2024-12-06T02:31:07.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.197 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:47.197 Verification LBA range: start 0x0 length 0x4000
00:22:47.197 NVMe0n1 : 15.00 10885.76 42.52 322.42 0.00 11397.62 498.64 23820.91
00:22:47.197 [2024-12-06T02:31:07.338Z] ===================================================================================================================
00:22:47.197 [2024-12-06T02:31:07.338Z] Total : 10885.76 42.52 322.42 0.00 11397.62 498.64 23820.91
00:22:47.197 Received shutdown signal, test time was about 15.000000 seconds
00:22:47.197
00:22:47.197 Latency(us)
00:22:47.197 [2024-12-06T02:31:07.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.197 [2024-12-06T02:31:07.338Z] ===================================================================================================================
00:22:47.197 [2024-12-06T02:31:07.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2711822
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2711822 /var/tmp/bdevperf.sock
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2711822 ']'
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:47.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.197 03:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:47.197 03:31:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.197 03:31:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:47.197 03:31:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:47.197 [2024-12-06 03:31:07.179135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:47.197 03:31:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:47.456 [2024-12-06 03:31:07.367656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:47.456 03:31:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:47.715 NVMe0n1 00:22:47.715 03:31:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:47.974 00:22:47.974 03:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:48.542 00:22:48.542 03:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:48.542 03:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:48.801 03:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:48.801 03:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:52.091 03:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:52.091 03:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:52.091 03:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:52.091 03:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2712742 00:22:52.091 03:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2712742 00:22:53.470 { 00:22:53.470 "results": [ 00:22:53.470 { 00:22:53.470 "job": "NVMe0n1", 00:22:53.470 "core_mask": "0x1", 00:22:53.470 "workload": "verify", 00:22:53.470 "status": "finished", 00:22:53.470 "verify_range": { 00:22:53.470 "start": 0, 00:22:53.470 "length": 16384 00:22:53.470 }, 00:22:53.470 "queue_depth": 128, 00:22:53.470 "io_size": 4096, 00:22:53.470 "runtime": 1.008135, 00:22:53.470 "iops": 10711.859026816845, 00:22:53.470 "mibps": 41.8431993235033, 00:22:53.470 "io_failed": 0, 00:22:53.470 "io_timeout": 0, 00:22:53.470 "avg_latency_us": 
11900.111799723807, 00:22:53.470 "min_latency_us": 2621.44, 00:22:53.470 "max_latency_us": 9630.942608695652 00:22:53.470 } 00:22:53.470 ], 00:22:53.470 "core_count": 1 00:22:53.470 } 00:22:53.470 03:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:53.470 [2024-12-06 03:31:06.807654] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:22:53.470 [2024-12-06 03:31:06.807705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711822 ] 00:22:53.470 [2024-12-06 03:31:06.870486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.470 [2024-12-06 03:31:06.908633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.470 [2024-12-06 03:31:08.863994] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:53.470 [2024-12-06 03:31:08.864039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.470 [2024-12-06 03:31:08.864050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.470 [2024-12-06 03:31:08.864058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.470 [2024-12-06 03:31:08.864065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.470 [2024-12-06 03:31:08.864073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:53.470 [2024-12-06 03:31:08.864080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.470 [2024-12-06 03:31:08.864087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.470 [2024-12-06 03:31:08.864094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.470 [2024-12-06 03:31:08.864101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:53.470 [2024-12-06 03:31:08.864127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:53.470 [2024-12-06 03:31:08.864142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c64fa0 (9): Bad file descriptor 00:22:53.470 [2024-12-06 03:31:08.956054] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:53.470 Running I/O for 1 seconds... 
00:22:53.470 10671.00 IOPS, 41.68 MiB/s 00:22:53.470 Latency(us) 00:22:53.470 [2024-12-06T02:31:13.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.470 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:53.470 Verification LBA range: start 0x0 length 0x4000 00:22:53.470 NVMe0n1 : 1.01 10711.86 41.84 0.00 0.00 11900.11 2621.44 9630.94 00:22:53.470 [2024-12-06T02:31:13.611Z] =================================================================================================================== 00:22:53.470 [2024-12-06T02:31:13.611Z] Total : 10711.86 41.84 0.00 0.00 11900.11 2621.44 9630.94 00:22:53.470 03:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:53.470 03:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:53.470 03:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.729 03:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:53.729 03:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:53.729 03:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.988 03:31:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:57.275 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.275 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2711822 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2711822 ']' 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2711822 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2711822 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2711822' 00:22:57.276 killing process with pid 2711822 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2711822 00:22:57.276 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2711822 00:22:57.534 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:57.534 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:57.534 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:57.534 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:57.534 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.793 rmmod nvme_tcp 00:22:57.793 rmmod nvme_fabrics 00:22:57.793 rmmod nvme_keyring 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2708815 ']' 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2708815 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2708815 ']' 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2708815 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2708815 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2708815' 00:22:57.793 killing process with pid 2708815 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2708815 00:22:57.793 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2708815 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.054 03:31:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.054 03:31:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.961 03:31:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.961 00:22:59.961 real 0m37.410s 00:22:59.961 user 1m58.721s 00:22:59.961 sys 
0m7.884s 00:22:59.961 03:31:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.961 03:31:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.961 ************************************ 00:22:59.961 END TEST nvmf_failover 00:22:59.961 ************************************ 00:23:00.220 03:31:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:00.220 03:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.220 03:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.220 03:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.220 ************************************ 00:23:00.220 START TEST nvmf_host_discovery 00:23:00.220 ************************************ 00:23:00.220 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:00.220 * Looking for test storage... 
00:23:00.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:00.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.221 --rc genhtml_branch_coverage=1 00:23:00.221 --rc genhtml_function_coverage=1 00:23:00.221 --rc 
genhtml_legend=1 00:23:00.221 --rc geninfo_all_blocks=1 00:23:00.221 --rc geninfo_unexecuted_blocks=1 00:23:00.221 00:23:00.221 ' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:00.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.221 --rc genhtml_branch_coverage=1 00:23:00.221 --rc genhtml_function_coverage=1 00:23:00.221 --rc genhtml_legend=1 00:23:00.221 --rc geninfo_all_blocks=1 00:23:00.221 --rc geninfo_unexecuted_blocks=1 00:23:00.221 00:23:00.221 ' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:00.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.221 --rc genhtml_branch_coverage=1 00:23:00.221 --rc genhtml_function_coverage=1 00:23:00.221 --rc genhtml_legend=1 00:23:00.221 --rc geninfo_all_blocks=1 00:23:00.221 --rc geninfo_unexecuted_blocks=1 00:23:00.221 00:23:00.221 ' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:00.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.221 --rc genhtml_branch_coverage=1 00:23:00.221 --rc genhtml_function_coverage=1 00:23:00.221 --rc genhtml_legend=1 00:23:00.221 --rc geninfo_all_blocks=1 00:23:00.221 --rc geninfo_unexecuted_blocks=1 00:23:00.221 00:23:00.221 ' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.221 03:31:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.221 03:31:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.221 03:31:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:00.221 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.222 03:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:05.496 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:05.497 
03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.497 03:31:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:05.497 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:05.497 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:05.497 Found net devices under 0000:86:00.0: cvl_0_0 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:05.497 Found net devices under 0000:86:00.1: cvl_0_1 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:05.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:23:05.497 00:23:05.497 --- 10.0.0.2 ping statistics --- 00:23:05.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.497 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:23:05.497 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:05.755 00:23:05.755 --- 10.0.0.1 ping statistics --- 00:23:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.755 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.755 
03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2716972 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2716972 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2716972 ']' 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.755 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.755 [2024-12-06 03:31:25.711825] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:23:05.755 [2024-12-06 03:31:25.711866] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.755 [2024-12-06 03:31:25.778093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.755 [2024-12-06 03:31:25.820490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.755 [2024-12-06 03:31:25.820524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.755 [2024-12-06 03:31:25.820531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.755 [2024-12-06 03:31:25.820537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.755 [2024-12-06 03:31:25.820545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:05.755 [2024-12-06 03:31:25.821136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.013 [2024-12-06 03:31:25.954487] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.013 [2024-12-06 03:31:25.962654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:06.013 03:31:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.013 null0 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.013 null1 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2717113 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2717113 /tmp/host.sock 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2717113 ']' 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:06.013 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.013 03:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.013 [2024-12-06 03:31:26.037235] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:23:06.013 [2024-12-06 03:31:26.037280] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717113 ] 00:23:06.013 [2024-12-06 03:31:26.098163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.013 [2024-12-06 03:31:26.141485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.271 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:06.272 
03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:06.272 03:31:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:06.272 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:06.530 
03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:06.530 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:06.531 [2024-12-06 03:31:26.556182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:06.531 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.789 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:06.789 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:23:06.790 03:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:23:07.358 [2024-12-06 03:31:27.260857] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:07.358 [2024-12-06 03:31:27.260875] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:07.358 [2024-12-06 03:31:27.260887] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:07.358 [2024-12-06 03:31:27.348141] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:23:07.358 [2024-12-06 03:31:27.450777] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:23:07.358 [2024-12-06 03:31:27.451488] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2293920:1 started.
00:23:07.358 [2024-12-06 03:31:27.452867] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:07.358 [2024-12-06 03:31:27.452883] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:07.358 [2024-12-06 03:31:27.459779] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2293920 was disconnected and freed. delete nvme_qpair.
00:23:07.617 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:07.617 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:07.617 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:07.617 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:07.617 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:07.617 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.617 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:07.617 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:07.617 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:07.876 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:07.877 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:07.877 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:07.877 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:07.877 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:07.877 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.877 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:07.877 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:07.877 03:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:08.137 [2024-12-06 03:31:28.100580] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2293ca0:1 started.
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-12-06 03:31:28.152498] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2293ca0 was disconnected and freed. delete nvme_qpair.
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:08.137 03:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:23:09.077 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:09.077 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:09.077 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:09.077 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:23:09.077 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:09.077 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.077 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:09.077 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 03:31:29.223429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-12-06 03:31:29.224119] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-12-06 03:31:29.224140] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-12-06 03:31:29.310381] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:09.336 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-12-06 03:31:29.368983] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
[2024-12-06 03:31:29.369015] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-12-06 03:31:29.369024] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
[2024-12-06 03:31:29.369029] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:09.337 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:23:09.337 03:31:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:23:10.271 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:10.271 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:23:10.271 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:10.271 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:10.271 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:10.272 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 03:31:30.459033] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-12-06 03:31:30.459059] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:10.531 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
[2024-12-06 03:31:30.467526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 03:31:30.467548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 03:31:30.467557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 03:31:30.467565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 03:31:30.467590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 03:31:30.467597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 03:31:30.467604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 03:31:30.467615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-06 03:31:30.467623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265930 is same with the state(6) to be set
00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
[2024-12-06 03:31:30.477536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2265930 (9): Bad file descriptor
00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-12-06 03:31:30.487571] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-06 03:31:30.487582] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-06 03:31:30.487589] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-06 03:31:30.487598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-06 03:31:30.487616] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-12-06 03:31:30.487895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-06 03:31:30.487910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2265930 with addr=10.0.0.2, port=4420
[2024-12-06 03:31:30.487918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265930 is same with the state(6) to be set
[2024-12-06 03:31:30.487931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2265930 (9): Bad file descriptor
[2024-12-06 03:31:30.487941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-06 03:31:30.487953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-06 03:31:30.487963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-12-06 03:31:30.487969] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-12-06 03:31:30.487974] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-12-06 03:31:30.487979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-12-06 03:31:30.497646] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-06 03:31:30.497656] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-06 03:31:30.497660] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-06 03:31:30.497664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-06 03:31:30.497677] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:10.532 [2024-12-06 03:31:30.497880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.532 [2024-12-06 03:31:30.497898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2265930 with addr=10.0.0.2, port=4420 00:23:10.532 [2024-12-06 03:31:30.497907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265930 is same with the state(6) to be set 00:23:10.532 [2024-12-06 03:31:30.497917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2265930 (9): Bad file descriptor 00:23:10.532 [2024-12-06 03:31:30.497927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:10.532 [2024-12-06 03:31:30.497933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:10.532 [2024-12-06 03:31:30.497939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:10.532 [2024-12-06 03:31:30.497945] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:10.532 [2024-12-06 03:31:30.497954] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:10.532 [2024-12-06 03:31:30.497958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:10.532 [2024-12-06 03:31:30.507709] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:10.532 [2024-12-06 03:31:30.507722] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:10.532 [2024-12-06 03:31:30.507726] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:10.532 [2024-12-06 03:31:30.507730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:10.532 [2024-12-06 03:31:30.507745] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:10.532 [2024-12-06 03:31:30.508006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.532 [2024-12-06 03:31:30.508020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2265930 with addr=10.0.0.2, port=4420 00:23:10.532 [2024-12-06 03:31:30.508028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265930 is same with the state(6) to be set 00:23:10.532 [2024-12-06 03:31:30.508038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2265930 (9): Bad file descriptor 00:23:10.532 [2024-12-06 03:31:30.508047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:10.532 [2024-12-06 03:31:30.508054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:10.532 [2024-12-06 03:31:30.508060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:10.532 [2024-12-06 03:31:30.508066] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:10.532 [2024-12-06 03:31:30.508070] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:10.532 [2024-12-06 03:31:30.508073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:10.532 [2024-12-06 03:31:30.517775] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:10.532 [2024-12-06 03:31:30.517787] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:10.532 [2024-12-06 03:31:30.517792] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:10.532 [2024-12-06 03:31:30.517796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:10.532 [2024-12-06 03:31:30.517808] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:10.532 [2024-12-06 03:31:30.518011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.532 [2024-12-06 03:31:30.518024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2265930 with addr=10.0.0.2, port=4420 00:23:10.532 [2024-12-06 03:31:30.518031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265930 is same with the state(6) to be set 00:23:10.532 [2024-12-06 03:31:30.518041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2265930 (9): Bad file descriptor 00:23:10.532 [2024-12-06 03:31:30.518050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:10.532 [2024-12-06 03:31:30.518057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:10.532 [2024-12-06 03:31:30.518063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:10.532 [2024-12-06 03:31:30.518068] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:10.532 [2024-12-06 03:31:30.518073] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:10.532 [2024-12-06 03:31:30.518077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.532 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:10.532 [2024-12-06 03:31:30.527839] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:10.532 [2024-12-06 03:31:30.527853] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:10.532 [2024-12-06 03:31:30.527857] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:10.532 [2024-12-06 03:31:30.527861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:10.532 [2024-12-06 03:31:30.527875] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:10.532 [2024-12-06 03:31:30.528086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.533 [2024-12-06 03:31:30.528099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2265930 with addr=10.0.0.2, port=4420 00:23:10.533 [2024-12-06 03:31:30.528106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265930 is same with the state(6) to be set 00:23:10.533 [2024-12-06 03:31:30.528121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2265930 (9): Bad file descriptor 00:23:10.533 [2024-12-06 03:31:30.528130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:10.533 [2024-12-06 03:31:30.528136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:10.533 [2024-12-06 03:31:30.528143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:10.533 [2024-12-06 03:31:30.528148] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:10.533 [2024-12-06 03:31:30.528152] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:10.533 [2024-12-06 03:31:30.528156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:10.533 [2024-12-06 03:31:30.537906] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:10.533 [2024-12-06 03:31:30.537916] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:10.533 [2024-12-06 03:31:30.537920] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:10.533 [2024-12-06 03:31:30.537924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:10.533 [2024-12-06 03:31:30.537936] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:10.533 [2024-12-06 03:31:30.538207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.533 [2024-12-06 03:31:30.538219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2265930 with addr=10.0.0.2, port=4420 00:23:10.533 [2024-12-06 03:31:30.538227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265930 is same with the state(6) to be set 00:23:10.533 [2024-12-06 03:31:30.538236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2265930 (9): Bad file descriptor 00:23:10.533 [2024-12-06 03:31:30.538251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:10.533 [2024-12-06 03:31:30.538258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:10.533 [2024-12-06 03:31:30.538264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:10.533 [2024-12-06 03:31:30.538270] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:10.533 [2024-12-06 03:31:30.538274] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:10.533 [2024-12-06 03:31:30.538278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:10.533 [2024-12-06 03:31:30.545279] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:10.533 [2024-12-06 03:31:30.545295] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.533 03:31:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.533 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:10.792 03:31:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.792 
03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:10.792 03:31:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.792 03:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.168 [2024-12-06 03:31:31.873117] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:12.168 [2024-12-06 03:31:31.873133] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:12.168 [2024-12-06 03:31:31.873145] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:12.168 [2024-12-06 03:31:31.961406] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:12.168 [2024-12-06 03:31:32.230683] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:12.168 [2024-12-06 03:31:32.231308] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2299b10:1 started. 00:23:12.168 [2024-12-06 03:31:32.232971] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:12.168 [2024-12-06 03:31:32.232996] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.168 [2024-12-06 03:31:32.242296] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2299b10 was disconnected and freed. delete nvme_qpair. 00:23:12.168 request: 00:23:12.168 { 00:23:12.168 "name": "nvme", 00:23:12.168 "trtype": "tcp", 00:23:12.168 "traddr": "10.0.0.2", 00:23:12.168 "adrfam": "ipv4", 00:23:12.168 "trsvcid": "8009", 00:23:12.168 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:12.168 "wait_for_attach": true, 00:23:12.168 "method": "bdev_nvme_start_discovery", 00:23:12.168 "req_id": 1 00:23:12.168 } 00:23:12.168 Got JSON-RPC error response 00:23:12.168 response: 00:23:12.168 { 00:23:12.168 "code": -17, 00:23:12.168 "message": "File exists" 00:23:12.168 } 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:12.168 
03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.168 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:12.427 03:31:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.427 request: 00:23:12.427 { 00:23:12.427 "name": "nvme_second", 00:23:12.427 "trtype": "tcp", 00:23:12.427 "traddr": "10.0.0.2", 00:23:12.427 "adrfam": "ipv4", 00:23:12.427 "trsvcid": "8009", 00:23:12.427 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:12.427 "wait_for_attach": true, 00:23:12.427 "method": "bdev_nvme_start_discovery", 00:23:12.427 "req_id": 1 00:23:12.427 } 00:23:12.427 Got JSON-RPC error response 00:23:12.427 response: 00:23:12.427 { 00:23:12.427 "code": -17, 00:23:12.427 "message": "File exists" 00:23:12.427 } 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:12.427 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.428 03:31:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.362 [2024-12-06 03:31:33.472626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.362 [2024-12-06 03:31:33.472660] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x225e590 with addr=10.0.0.2, port=8010 00:23:13.362 [2024-12-06 03:31:33.472672] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:13.362 [2024-12-06 03:31:33.472683] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:13.362 [2024-12-06 03:31:33.472690] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:14.739 [2024-12-06 03:31:34.475094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.739 [2024-12-06 03:31:34.475119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x225e590 with addr=10.0.0.2, port=8010 00:23:14.739 [2024-12-06 03:31:34.475130] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:14.739 [2024-12-06 03:31:34.475136] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:14.739 [2024-12-06 03:31:34.475142] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:15.673 [2024-12-06 03:31:35.477238] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:15.673 request: 00:23:15.673 { 00:23:15.673 "name": "nvme_second", 00:23:15.673 "trtype": "tcp", 00:23:15.673 "traddr": "10.0.0.2", 00:23:15.673 "adrfam": "ipv4", 00:23:15.673 "trsvcid": "8010", 00:23:15.673 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:15.673 "wait_for_attach": false, 00:23:15.673 "attach_timeout_ms": 3000, 00:23:15.673 "method": "bdev_nvme_start_discovery", 00:23:15.673 "req_id": 1 00:23:15.673 } 00:23:15.673 Got JSON-RPC error response 00:23:15.673 response: 00:23:15.673 { 00:23:15.673 "code": -110, 00:23:15.673 "message": "Connection timed out" 00:23:15.673 } 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2717113 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:15.673 03:31:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.673 rmmod nvme_tcp 00:23:15.673 rmmod nvme_fabrics 00:23:15.673 rmmod nvme_keyring 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2716972 ']' 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2716972 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2716972 ']' 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2716972 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716972 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716972' 
00:23:15.673 killing process with pid 2716972 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2716972 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2716972 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:15.673 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:15.931 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.931 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:15.931 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.931 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.931 03:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.834 03:31:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.834 00:23:17.834 real 0m17.736s 00:23:17.834 user 0m22.624s 00:23:17.834 sys 0m5.360s 00:23:17.834 03:31:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.834 03:31:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.834 ************************************ 00:23:17.834 END TEST nvmf_host_discovery 00:23:17.834 ************************************ 00:23:17.834 03:31:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:17.834 03:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.834 03:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.834 03:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.834 ************************************ 00:23:17.834 START TEST nvmf_host_multipath_status 00:23:17.834 ************************************ 00:23:17.834 03:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:18.093 * Looking for test storage... 
00:23:18.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.093 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:18.093 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:23:18.093 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:18.093 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:18.093 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.093 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.093 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.093 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:18.094 03:31:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.094 03:31:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:18.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.094 --rc genhtml_branch_coverage=1 00:23:18.094 --rc genhtml_function_coverage=1 00:23:18.094 --rc genhtml_legend=1 00:23:18.094 --rc geninfo_all_blocks=1 00:23:18.094 --rc geninfo_unexecuted_blocks=1 00:23:18.094 00:23:18.094 ' 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:18.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.094 --rc genhtml_branch_coverage=1 00:23:18.094 --rc genhtml_function_coverage=1 00:23:18.094 --rc genhtml_legend=1 00:23:18.094 --rc geninfo_all_blocks=1 00:23:18.094 --rc geninfo_unexecuted_blocks=1 00:23:18.094 00:23:18.094 ' 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:18.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.094 --rc genhtml_branch_coverage=1 00:23:18.094 --rc genhtml_function_coverage=1 00:23:18.094 --rc genhtml_legend=1 00:23:18.094 --rc geninfo_all_blocks=1 00:23:18.094 --rc geninfo_unexecuted_blocks=1 00:23:18.094 00:23:18.094 ' 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:18.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.094 --rc genhtml_branch_coverage=1 00:23:18.094 --rc genhtml_function_coverage=1 00:23:18.094 --rc genhtml_legend=1 00:23:18.094 --rc geninfo_all_blocks=1 00:23:18.094 --rc geninfo_unexecuted_blocks=1 00:23:18.094 00:23:18.094 ' 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:18.094 
03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.094 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.095 03:31:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.095 03:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.361 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:23.362 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:23.362 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:23.362 Found net devices under 0000:86:00.0: cvl_0_0 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.362 03:31:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:23.362 Found net devices under 0000:86:00.1: cvl_0_1 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.362 03:31:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.362 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:23:23.619 00:23:23.619 --- 10.0.0.2 ping statistics --- 00:23:23.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.619 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:23:23.619 00:23:23.619 --- 10.0.0.1 ping statistics --- 00:23:23.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.619 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:23.619 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2722297 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2722297 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2722297 ']' 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.620 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:23.620 [2024-12-06 03:31:43.614486] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:23:23.620 [2024-12-06 03:31:43.614533] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.620 [2024-12-06 03:31:43.680528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:23.620 [2024-12-06 03:31:43.722683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.620 [2024-12-06 03:31:43.722720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
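The `nvmf_tcp_init` sequence above builds the two-port loopback topology this test runs on: the target-side port (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace with `10.0.0.2/24`, the initiator-side port (`cvl_0_1`) keeps `10.0.0.1/24` in the default namespace, TCP/4420 is opened with iptables, and reachability is verified with `ping` before `nvmf_tgt` is started inside the namespace. A dry-run sketch of those steps, in the order the log executes them (commands are echoed rather than executed; set `DO=1` and run as root to apply them for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up by nvmf_tcp_init above.
# Interface and namespace names are the ones from this run's log.
set -euo pipefail

run() { if [[ "${DO:-0}" == 1 ]]; then "$@"; else echo "+ $*"; fi; }

build_topology() {
  local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  run ip -4 addr flush "$target_if"
  run ip -4 addr flush "$initiator_if"
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$initiator_if"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  # accept NVMe/TCP traffic arriving on the initiator-side port
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  # verify both directions before starting nvmf_tgt inside the namespace
  run ping -c 1 10.0.0.2
  run ip netns exec "$ns" ping -c 1 10.0.0.1
}

build_topology
```

Keeping the target in its own namespace is what lets a single host exercise a real TCP path between initiator and target, as the two successful pings above confirm.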
00:23:23.620 [2024-12-06 03:31:43.722727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.620 [2024-12-06 03:31:43.722733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.620 [2024-12-06 03:31:43.722738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.620 [2024-12-06 03:31:43.723961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.620 [2024-12-06 03:31:43.723964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.878 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.878 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:23.878 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.878 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.878 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:23.878 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.878 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2722297 00:23:23.878 03:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:24.135 [2024-12-06 03:31:44.026890] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.136 03:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:24.136 Malloc0 00:23:24.136 03:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:24.393 03:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:24.650 03:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.907 [2024-12-06 03:31:44.812387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.907 03:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:24.907 [2024-12-06 03:31:44.988837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.907 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2722547 00:23:24.907 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:24.907 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.907 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2722547 /var/tmp/bdevperf.sock 00:23:24.908 03:31:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2722547 ']' 00:23:24.908 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.908 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.908 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.908 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.908 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:25.164 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.164 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:25.164 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:25.420 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:25.982 Nvme0n1 00:23:25.982 03:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:26.239 Nvme0n1 00:23:26.239 03:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:26.239 03:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:28.768 03:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:28.768 03:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:28.768 03:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:28.768 03:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:29.703 03:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:29.703 03:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:29.703 03:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.703 03:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:29.961 03:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.961 03:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:29.961 03:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.961 03:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.219 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.219 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.219 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.219 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.477 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.477 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.477 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.477 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.477 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.477 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:30.477 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.477 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.735 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.735 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:30.735 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.735 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:30.993 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.993 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:30.993 03:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:31.251 03:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:31.508 03:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:32.440 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:32.440 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:32.440 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.440 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:32.699 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:32.699 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:32.699 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.699 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:32.957 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.957 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:32.957 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:23:32.957 03:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.957 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.957 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:32.957 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.957 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:33.215 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.215 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:33.215 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.215 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:33.473 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.473 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:33.473 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.473 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:33.732 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.732 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:33.732 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:33.990 03:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:33.990 03:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:35.364 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:35.364 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:35.364 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.364 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.364 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.364 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:35.364 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.364 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:35.621 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:35.621 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:35.621 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.621 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:35.621 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.621 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:35.621 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.621 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.878 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.878 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.878 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.879 03:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:36.136 03:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.136 03:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:36.136 03:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.136 03:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:36.393 03:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.393 03:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:36.393 03:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:36.650 03:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:36.650 03:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:38.025 03:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:38.025 03:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:38.025 03:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.025 03:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:38.025 03:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.025 03:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:38.025 03:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.025 03:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:38.284 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:38.284 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:38.284 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.284 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:38.285 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.285 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:38.285 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.285 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:38.544 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.544 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:38.544 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.544 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:38.803 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.803 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:38.803 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.803 03:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:39.062 03:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.062 03:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:39.062 03:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:39.322 03:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:39.322 03:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:40.700 03:32:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.700 03:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:40.960 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.960 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:40.960 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.960 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:41.219 
03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.219 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:41.219 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.219 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:41.479 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:41.479 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:41.479 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.479 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:41.479 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:41.479 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:41.479 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:41.737 03:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:42.081 03:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:43.128 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:43.128 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:43.128 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.128 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:43.128 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.128 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:43.128 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.128 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:43.391 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.391 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:43.391 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.391 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:43.649 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.649 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:43.649 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.649 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:43.908 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.908 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:43.908 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.908 03:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:43.908 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.908 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:43.908 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.908 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:44.167 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.167 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:44.424 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:44.424 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:44.682 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:44.940 03:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:45.869 03:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:45.869 03:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:45.869 03:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:45.869 03:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:46.126 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.126 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:46.126 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.126 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:46.384 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.384 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:46.384 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.384 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:46.384 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.384 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:46.384 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:46.384 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:46.641 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.641 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:46.641 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.641 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:46.899 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.899 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:46.899 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:46.899 03:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.157 03:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.157 03:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:47.157 03:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:47.415 03:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:47.415 03:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:48.787 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:48.787 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:48.787 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.787 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:48.787 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.787 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:48.787 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:48.787 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.045 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.045 03:32:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:49.045 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.045 03:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:49.302 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.302 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:49.302 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:49.302 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.302 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.302 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:49.302 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.302 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:49.559 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.559 
03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:49.559 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.559 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:49.815 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.815 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:49.815 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:50.072 03:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:50.072 03:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:51.447 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:51.447 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:51.447 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.447 03:32:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:51.447 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.447 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:51.447 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.447 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:51.706 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.706 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:51.706 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.706 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:51.706 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.706 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:51.706 03:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.706 03:32:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.964 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.964 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:51.964 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.964 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.223 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.223 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:52.223 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.223 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.481 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.481 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:52.481 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:52.740 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:52.740 03:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:54.114 03:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:54.114 03:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:54.114 03:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.114 03:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.114 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.114 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:54.115 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.115 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.373 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.373 03:32:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.373 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.373 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.373 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.373 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.373 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.373 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:54.631 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.631 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:54.631 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.631 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:54.889 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.889 
03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:54.889 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.889 03:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2722547 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2722547 ']' 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2722547 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2722547 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2722547' 00:23:55.146 killing process with pid 2722547 00:23:55.146 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2722547 00:23:55.146 
03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2722547 00:23:55.146 { 00:23:55.146 "results": [ 00:23:55.146 { 00:23:55.146 "job": "Nvme0n1", 00:23:55.146 "core_mask": "0x4", 00:23:55.146 "workload": "verify", 00:23:55.146 "status": "terminated", 00:23:55.146 "verify_range": { 00:23:55.146 "start": 0, 00:23:55.146 "length": 16384 00:23:55.146 }, 00:23:55.146 "queue_depth": 128, 00:23:55.146 "io_size": 4096, 00:23:55.146 "runtime": 28.696539, 00:23:55.146 "iops": 10274.409746764235, 00:23:55.146 "mibps": 40.13441307329779, 00:23:55.146 "io_failed": 0, 00:23:55.146 "io_timeout": 0, 00:23:55.146 "avg_latency_us": 12437.752315454809, 00:23:55.146 "min_latency_us": 573.44, 00:23:55.146 "max_latency_us": 3019898.88 00:23:55.146 } 00:23:55.146 ], 00:23:55.146 "core_count": 1 00:23:55.146 } 00:23:55.407 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2722547 00:23:55.407 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:55.407 [2024-12-06 03:31:45.052537] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:23:55.407 [2024-12-06 03:31:45.052592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722547 ] 00:23:55.407 [2024-12-06 03:31:45.111381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.407 [2024-12-06 03:31:45.152418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.407 Running I/O for 90 seconds... 
00:23:55.407 10966.00 IOPS, 42.84 MiB/s [2024-12-06T02:32:15.548Z] 10916.00 IOPS, 42.64 MiB/s [2024-12-06T02:32:15.548Z] 10936.67 IOPS, 42.72 MiB/s [2024-12-06T02:32:15.548Z] 11003.50 IOPS, 42.98 MiB/s [2024-12-06T02:32:15.548Z] 11054.00 IOPS, 43.18 MiB/s [2024-12-06T02:32:15.548Z] 11036.17 IOPS, 43.11 MiB/s [2024-12-06T02:32:15.548Z] 11031.86 IOPS, 43.09 MiB/s [2024-12-06T02:32:15.548Z] 11055.62 IOPS, 43.19 MiB/s [2024-12-06T02:32:15.548Z] 11044.22 IOPS, 43.14 MiB/s [2024-12-06T02:32:15.548Z] 11045.50 IOPS, 43.15 MiB/s [2024-12-06T02:32:15.548Z] 11053.91 IOPS, 43.18 MiB/s [2024-12-06T02:32:15.548Z] 11060.58 IOPS, 43.21 MiB/s [2024-12-06T02:32:15.548Z] [2024-12-06 03:31:59.203110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.203148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.203186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.203195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.203209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.203217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.203230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.203236] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.203249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.203256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.203269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.203276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.203288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.203295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.203308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.203315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.203327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.203334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:101 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.407 [2024-12-06 03:31:59.204754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:55.407 [2024-12-06 03:31:59.204768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79640 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 
m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.204988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.204996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:55.408 [2024-12-06 03:31:59.205174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.408 
[2024-12-06 03:31:59.205307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.408 [2024-12-06 
03:31:59.205429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.408 [2024-12-06 03:31:59.205451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 
03:31:59.205560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205680] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.408 [2024-12-06 03:31:59.205703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:55.408 [2024-12-06 03:31:59.205767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.205776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.205794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.205801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.205818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.205825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.205843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.205850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.205867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.205875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.205892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.205900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.205917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.205924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.205941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.205953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.205971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.205978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.205995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.409 [2024-12-06 03:31:59.206622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.409 [2024-12-06 03:31:59.206647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.409 [2024-12-06 03:31:59.206672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.409 [2024-12-06 03:31:59.206698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:55.409 [2024-12-06 03:31:59.206716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.410 [2024-12-06 03:31:59.206723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:31:59.206741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.410 [2024-12-06 03:31:59.206755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:31:59.206773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.410 [2024-12-06 03:31:59.206780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:31:59.206799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.410 [2024-12-06 03:31:59.206806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:55.410 10845.15 IOPS, 42.36 MiB/s [2024-12-06T02:32:15.551Z] 10070.50 IOPS, 39.34 MiB/s [2024-12-06T02:32:15.551Z] 9399.13 IOPS, 36.72 MiB/s [2024-12-06T02:32:15.551Z] 8982.56 IOPS, 35.09 MiB/s [2024-12-06T02:32:15.551Z] 9104.65 IOPS, 35.57 MiB/s [2024-12-06T02:32:15.551Z] 9211.72 IOPS, 35.98 MiB/s [2024-12-06T02:32:15.551Z] 9401.05 IOPS, 36.72 MiB/s [2024-12-06T02:32:15.551Z] 9592.30 IOPS, 37.47 MiB/s [2024-12-06T02:32:15.551Z] 9754.29 IOPS, 38.10 MiB/s [2024-12-06T02:32:15.551Z] 9811.82 IOPS, 38.33 MiB/s [2024-12-06T02:32:15.551Z] 9854.35 IOPS, 38.49 MiB/s [2024-12-06T02:32:15.551Z] 9922.42 IOPS, 38.76 MiB/s [2024-12-06T02:32:15.551Z] 10049.88 IOPS, 39.26 MiB/s [2024-12-06T02:32:15.551Z] 10175.27 IOPS, 39.75 MiB/s [2024-12-06T02:32:15.551Z] [2024-12-06 03:32:12.848038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.410 [2024-12-06 03:32:12.848264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.410 [2024-12-06 03:32:12.848284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.410 [2024-12-06 03:32:12.848743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.410 [2024-12-06 03:32:12.848755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.848763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.848776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.848783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.848795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.848802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.848814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.848821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.848833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.848840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.848855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.848862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.848874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.848882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.848895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.848902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.411 [2024-12-06 03:32:12.849699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.411 [2024-12-06 03:32:12.849718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.411 [2024-12-06 03:32:12.849738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.411 [2024-12-06 03:32:12.849798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.411 [2024-12-06 03:32:12.849818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:55.411 [2024-12-06 03:32:12.849831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.849838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.849850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.849857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.412 [2024-12-06 03:32:12.850659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.850682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.850702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.850721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.850741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.850760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.850783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.850803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.412 [2024-12-06 03:32:12.850822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.850842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:55.412 [2024-12-06 03:32:12.850854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.412 [2024-12-06 03:32:12.850862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:23:55.412 [2024-12-06 03:32:12.850874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.412 [2024-12-06 03:32:12.850881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:23:55.412 [2024-12-06 03:32:12.850893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.412 [2024-12-06 03:32:12.850900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:23:55.412 10240.19 IOPS, 40.00 MiB/s
[2024-12-06T02:32:15.553Z] 10261.61 IOPS, 40.08 MiB/s
[2024-12-06T02:32:15.553Z] Received shutdown signal, test time was about 28.697195 seconds
00:23:55.412
00:23:55.412 Latency(us)
00:23:55.412 [2024-12-06T02:32:15.553Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:23:55.412 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:55.412 Verification LBA range: start 0x0 length 0x4000
00:23:55.412 Nvme0n1 : 28.70  10274.41  40.13  0.00  0.00  12437.75  573.44  3019898.88
00:23:55.412 [2024-12-06T02:32:15.553Z] ===================================================================================================================
00:23:55.412 [2024-12-06T02:32:15.553Z] Total : 10274.41  40.13  0.00  0.00  12437.75  573.44  3019898.88
00:23:55.412 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - 
SIGINT SIGTERM EXIT 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.670 rmmod nvme_tcp 00:23:55.670 rmmod nvme_fabrics 00:23:55.670 rmmod nvme_keyring 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2722297 ']' 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2722297 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2722297 ']' 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2722297 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:55.670 
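The throughput samples in the summary above report the same measurement two ways: IOPS and MiB/s. With the job's fixed 4096-byte IO size the two are related by simple arithmetic. A minimal sketch of that conversion (values copied from the log above; the awk one-liner is illustrative, not part of the SPDK test scripts):

```shell
#!/usr/bin/env bash
# Convert an IOPS figure into MiB/s for a fixed IO size, matching the
# "10274.41 IOPS ... 40.13 MiB/s" pair in the Latency(us) summary.
iops=10274.41
io_size=4096  # bytes per IO; the job ran with "IO size: 4096"

# MiB/s = IOPS * bytes-per-IO / 2^20
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "${mibps} MiB/s"   # prints "40.13 MiB/s"
```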
03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2722297 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2722297' 00:23:55.670 killing process with pid 2722297 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2722297 00:23:55.670 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2722297 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:55.928 03:32:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.928 03:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.829 03:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:57.829 00:23:57.829 real 0m39.975s 00:23:57.829 user 1m49.379s 00:23:57.829 sys 0m11.108s 00:23:57.829 03:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.829 03:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.829 ************************************ 00:23:57.829 END TEST nvmf_host_multipath_status 00:23:57.829 ************************************ 00:23:57.829 03:32:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:57.829 03:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:57.829 03:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.829 03:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.088 ************************************ 00:23:58.088 START TEST nvmf_discovery_remove_ifc 00:23:58.088 ************************************ 00:23:58.088 03:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:58.088 * Looking for test storage... 
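The killprocess trace above (autotest_common.sh) follows a common cleanup pattern: probe the PID with `kill -0`, check its command name so a recycled PID belonging to an unrelated process is not killed by mistake, then kill and reap it. A standalone sketch of that pattern under stated assumptions — the function body is simplified and the `sleep` target is a stand-in, not the SPDK reactor process from the log:

```shell
#!/usr/bin/env bash
# Cleanup pattern from the killprocess trace: probe with `kill -0`,
# sanity-check the command name, then kill and reap the child.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    # Refuse to kill if the PID now belongs to some other command.
    [ "$(ps --no-headers -o comm= "$pid")" = "sleep" ] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
}

sleep 60 &          # stand-in for the long-running test process
pid=$!
killprocess "$pid"
```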
00:23:58.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:23:58.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.088 --rc genhtml_branch_coverage=1 00:23:58.088 --rc genhtml_function_coverage=1 00:23:58.088 --rc genhtml_legend=1 00:23:58.088 --rc geninfo_all_blocks=1 00:23:58.088 --rc geninfo_unexecuted_blocks=1 00:23:58.088 00:23:58.088 ' 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:58.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.088 --rc genhtml_branch_coverage=1 00:23:58.088 --rc genhtml_function_coverage=1 00:23:58.088 --rc genhtml_legend=1 00:23:58.088 --rc geninfo_all_blocks=1 00:23:58.088 --rc geninfo_unexecuted_blocks=1 00:23:58.088 00:23:58.088 ' 00:23:58.088 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:58.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.088 --rc genhtml_branch_coverage=1 00:23:58.088 --rc genhtml_function_coverage=1 00:23:58.088 --rc genhtml_legend=1 00:23:58.088 --rc geninfo_all_blocks=1 00:23:58.089 --rc geninfo_unexecuted_blocks=1 00:23:58.089 00:23:58.089 ' 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:58.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.089 --rc genhtml_branch_coverage=1 00:23:58.089 --rc genhtml_function_coverage=1 00:23:58.089 --rc genhtml_legend=1 00:23:58.089 --rc geninfo_all_blocks=1 00:23:58.089 --rc geninfo_unexecuted_blocks=1 00:23:58.089 00:23:58.089 ' 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.089 
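The `lt 1.15 2` trace earlier walks scripts/common.sh's cmp_versions helper: both version strings are split on `.` into arrays, then compared field by field as integers, padding the shorter one with zeros. A condensed standalone sketch of the same idea (the function name and body here are simplified, not the exact SPDK implementation):

```shell
#!/usr/bin/env bash
# Field-by-field numeric version comparison, as in the cmp_versions trace:
# split on '.', pad the shorter version with zeros, compare left to right.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # lcov 1.15 predates the 2.x CLI
```

Comparing numerically per field (rather than as strings) is what makes 1.2.9 sort before 1.10, which a plain lexical compare would get wrong.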
03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.089 03:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.383 03:32:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.383 03:32:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.383 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:03.384 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.384 03:32:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:03.384 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:03.384 Found net devices under 0000:86:00.0: cvl_0_0 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:03.384 Found net devices under 0000:86:00.1: cvl_0_1 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.384 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:24:03.643 00:24:03.643 --- 10.0.0.2 ping statistics --- 00:24:03.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.643 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:24:03.643 00:24:03.643 --- 10.0.0.1 ping statistics --- 00:24:03.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.643 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2731114 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2731114 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2731114 ']' 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.643 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.643 [2024-12-06 03:32:23.739006] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:24:03.643 [2024-12-06 03:32:23.739048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.903 [2024-12-06 03:32:23.805790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.903 [2024-12-06 03:32:23.845684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.903 [2024-12-06 03:32:23.845721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:03.903 [2024-12-06 03:32:23.845729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.903 [2024-12-06 03:32:23.845737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.903 [2024-12-06 03:32:23.845742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.903 [2024-12-06 03:32:23.846313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.903 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.903 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:03.903 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.903 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.903 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.903 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.903 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:03.903 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.903 03:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.903 [2024-12-06 03:32:23.991360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.903 [2024-12-06 03:32:23.999536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:03.903 null0 00:24:03.903 [2024-12-06 03:32:24.031517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2731295 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2731295 /tmp/host.sock 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2731295 ']' 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:04.163 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.163 [2024-12-06 03:32:24.102688] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:24:04.163 [2024-12-06 03:32:24.102733] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731295 ] 00:24:04.163 [2024-12-06 03:32:24.164497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.163 [2024-12-06 03:32:24.206862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.163 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.422 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.422 03:32:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:04.422 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.422 03:32:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.358 [2024-12-06 03:32:25.398122] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:05.358 [2024-12-06 03:32:25.398142] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:05.358 [2024-12-06 03:32:25.398154] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:05.358 [2024-12-06 03:32:25.484417] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:05.615 [2024-12-06 03:32:25.539000] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:05.615 [2024-12-06 03:32:25.539647] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16c5940:1 started. 
00:24:05.615 [2024-12-06 03:32:25.541030] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:05.615 [2024-12-06 03:32:25.541072] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:05.615 [2024-12-06 03:32:25.541091] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:05.615 [2024-12-06 03:32:25.541103] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:05.615 [2024-12-06 03:32:25.541120] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:05.615 [2024-12-06 03:32:25.546636] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16c5940 was disconnected and freed. delete nvme_qpair. 
00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.615 03:32:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:05.615 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.616 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:05.616 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.616 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:05.616 03:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:07.012 03:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:07.947 03:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:08.886 03:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:08.886 03:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.886 03:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:08.886 03:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.886 03:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:08.886 03:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.886 03:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:08.886 03:32:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.886 03:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:08.886 03:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:09.825 03:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.204 [2024-12-06 03:32:30.982681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:11.204 [2024-12-06 03:32:30.982729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.204 [2024-12-06 03:32:30.982741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.204 [2024-12-06 03:32:30.982751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.204 [2024-12-06 03:32:30.982758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.204 [2024-12-06 03:32:30.982765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.204 [2024-12-06 03:32:30.982772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.204 [2024-12-06 03:32:30.982779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.204 [2024-12-06 03:32:30.982786] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.204 [2024-12-06 03:32:30.982793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.204 [2024-12-06 03:32:30.982800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.204 [2024-12-06 03:32:30.982807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2160 is same with the state(6) to be set 00:24:11.204 [2024-12-06 03:32:30.992703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a2160 (9): Bad file descriptor 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:11.204 03:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.204 [2024-12-06 03:32:31.002739] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:11.204 [2024-12-06 03:32:31.002752] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:11.204 [2024-12-06 03:32:31.002759] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:11.204 [2024-12-06 03:32:31.002764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:11.204 [2024-12-06 03:32:31.002787] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:12.141 03:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.141 03:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.141 03:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.141 03:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.141 03:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.141 03:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.141 03:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.141 [2024-12-06 03:32:32.047964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:12.141 [2024-12-06 03:32:32.048005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a2160 with addr=10.0.0.2, port=4420 00:24:12.141 [2024-12-06 03:32:32.048022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2160 is same with the state(6) to be set 00:24:12.141 [2024-12-06 03:32:32.048052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a2160 (9): Bad file descriptor 00:24:12.141 [2024-12-06 03:32:32.048476] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:12.141 [2024-12-06 03:32:32.048504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:12.141 [2024-12-06 03:32:32.048515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:12.141 [2024-12-06 03:32:32.048527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:12.141 [2024-12-06 03:32:32.048536] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:12.141 [2024-12-06 03:32:32.048543] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:12.141 [2024-12-06 03:32:32.048549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:12.141 [2024-12-06 03:32:32.048559] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:12.141 [2024-12-06 03:32:32.048566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:12.141 03:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.141 03:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.141 03:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.075 [2024-12-06 03:32:33.051045] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:13.075 [2024-12-06 03:32:33.051064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:13.075 [2024-12-06 03:32:33.051076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.075 [2024-12-06 03:32:33.051082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.075 [2024-12-06 03:32:33.051089] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:13.075 [2024-12-06 03:32:33.051096] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.075 [2024-12-06 03:32:33.051100] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.075 [2024-12-06 03:32:33.051104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:13.075 [2024-12-06 03:32:33.051126] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:13.075 [2024-12-06 03:32:33.051146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.075 [2024-12-06 03:32:33.051156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.076 [2024-12-06 03:32:33.051165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.076 [2024-12-06 03:32:33.051172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.076 [2024-12-06 03:32:33.051179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:13.076 [2024-12-06 03:32:33.051185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.076 [2024-12-06 03:32:33.051192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.076 [2024-12-06 03:32:33.051203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.076 [2024-12-06 03:32:33.051210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.076 [2024-12-06 03:32:33.051216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.076 [2024-12-06 03:32:33.051223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:13.076 [2024-12-06 03:32:33.051245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1691450 (9): Bad file descriptor 00:24:13.076 [2024-12-06 03:32:33.052244] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:13.076 [2024-12-06 03:32:33.052254] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.076 
03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.076 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:13.334 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:13.334 03:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.270 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.270 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.271 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.271 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.271 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.271 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:14.271 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.271 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.271 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:14.271 03:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.205 [2024-12-06 03:32:35.062415] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:15.205 [2024-12-06 03:32:35.062433] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:15.205 [2024-12-06 03:32:35.062445] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:15.205 [2024-12-06 03:32:35.189832] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:15.205 [2024-12-06 03:32:35.284605] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:15.205 [2024-12-06 03:32:35.285253] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1676090:1 started. 00:24:15.205 [2024-12-06 03:32:35.286319] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:15.205 [2024-12-06 03:32:35.286353] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:15.205 [2024-12-06 03:32:35.286370] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:15.205 [2024-12-06 03:32:35.286384] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:15.205 [2024-12-06 03:32:35.286391] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2731295 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2731295 ']' 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2731295 00:24:15.205 [2024-12-06 03:32:35.332480] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1676090 was disconnected and freed. delete nvme_qpair. 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.205 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2731295 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2731295' 00:24:15.462 killing process with pid 2731295 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2731295 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2731295 00:24:15.462 03:32:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.462 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.462 rmmod nvme_tcp 00:24:15.462 rmmod nvme_fabrics 00:24:15.462 rmmod nvme_keyring 00:24:15.463 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2731114 ']' 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2731114 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2731114 ']' 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2731114 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2731114 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2731114' 00:24:15.721 killing process with pid 2731114 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2731114 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2731114 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:24:15.721 03:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.253 03:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.253 00:24:18.253 real 0m19.888s 00:24:18.253 user 0m24.325s 00:24:18.253 sys 0m5.453s 00:24:18.253 03:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.253 03:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.253 ************************************ 00:24:18.253 END TEST nvmf_discovery_remove_ifc 00:24:18.253 ************************************ 00:24:18.253 03:32:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:18.253 03:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:18.253 03:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.253 03:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.253 ************************************ 00:24:18.253 START TEST nvmf_identify_kernel_target 00:24:18.253 ************************************ 00:24:18.253 03:32:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:18.253 * Looking for test storage... 
00:24:18.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:18.253 03:32:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.253 03:32:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:18.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.253 --rc genhtml_branch_coverage=1 00:24:18.253 --rc genhtml_function_coverage=1 00:24:18.253 --rc genhtml_legend=1 00:24:18.253 --rc geninfo_all_blocks=1 00:24:18.253 --rc geninfo_unexecuted_blocks=1 00:24:18.253 00:24:18.253 ' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:18.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.253 --rc genhtml_branch_coverage=1 00:24:18.253 --rc genhtml_function_coverage=1 00:24:18.253 --rc genhtml_legend=1 00:24:18.253 --rc geninfo_all_blocks=1 00:24:18.253 --rc geninfo_unexecuted_blocks=1 00:24:18.253 00:24:18.253 ' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:18.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.253 --rc genhtml_branch_coverage=1 00:24:18.253 --rc genhtml_function_coverage=1 00:24:18.253 --rc genhtml_legend=1 00:24:18.253 --rc geninfo_all_blocks=1 00:24:18.253 --rc geninfo_unexecuted_blocks=1 00:24:18.253 00:24:18.253 ' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:18.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.253 --rc genhtml_branch_coverage=1 00:24:18.253 --rc genhtml_function_coverage=1 00:24:18.253 --rc genhtml_legend=1 00:24:18.253 --rc geninfo_all_blocks=1 00:24:18.253 --rc geninfo_unexecuted_blocks=1 00:24:18.253 00:24:18.253 ' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.253 03:32:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.531 03:32:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:23.531 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.531 03:32:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:23.531 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.531 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.532 03:32:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:23.532 Found net devices under 0000:86:00.0: cvl_0_0 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:23.532 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:23.532 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:23.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:23.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:24:23.791 00:24:23.791 --- 10.0.0.2 ping statistics --- 00:24:23.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.791 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:24:23.791 00:24:23.791 --- 10.0.0.1 ping statistics --- 00:24:23.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.791 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:23.791 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:23.792 
03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:23.792 03:32:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:26.322 Waiting for block devices as requested 00:24:26.322 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:26.581 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:26.581 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:26.581 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:26.581 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:26.839 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:26.839 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:26.839 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:26.839 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:27.098 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:27.098 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:27.098 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:27.357 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:27.357 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:27.357 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:24:27.357 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:27.616 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:27.616 No valid GPT data, bailing 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:27.616 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:27.875 00:24:27.875 Discovery Log Number of Records 2, Generation counter 2 00:24:27.875 =====Discovery Log Entry 0====== 00:24:27.875 trtype: tcp 00:24:27.875 adrfam: ipv4 00:24:27.875 subtype: current discovery subsystem 
00:24:27.875 treq: not specified, sq flow control disable supported 00:24:27.875 portid: 1 00:24:27.875 trsvcid: 4420 00:24:27.875 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:27.875 traddr: 10.0.0.1 00:24:27.875 eflags: none 00:24:27.875 sectype: none 00:24:27.875 =====Discovery Log Entry 1====== 00:24:27.875 trtype: tcp 00:24:27.875 adrfam: ipv4 00:24:27.875 subtype: nvme subsystem 00:24:27.875 treq: not specified, sq flow control disable supported 00:24:27.875 portid: 1 00:24:27.875 trsvcid: 4420 00:24:27.875 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:27.875 traddr: 10.0.0.1 00:24:27.875 eflags: none 00:24:27.875 sectype: none 00:24:27.875 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:27.875 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:27.875 ===================================================== 00:24:27.875 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:27.875 ===================================================== 00:24:27.875 Controller Capabilities/Features 00:24:27.875 ================================ 00:24:27.875 Vendor ID: 0000 00:24:27.875 Subsystem Vendor ID: 0000 00:24:27.875 Serial Number: fb6de041d7ee08ca0413 00:24:27.875 Model Number: Linux 00:24:27.875 Firmware Version: 6.8.9-20 00:24:27.875 Recommended Arb Burst: 0 00:24:27.875 IEEE OUI Identifier: 00 00 00 00:24:27.875 Multi-path I/O 00:24:27.875 May have multiple subsystem ports: No 00:24:27.875 May have multiple controllers: No 00:24:27.875 Associated with SR-IOV VF: No 00:24:27.875 Max Data Transfer Size: Unlimited 00:24:27.875 Max Number of Namespaces: 0 00:24:27.875 Max Number of I/O Queues: 1024 00:24:27.875 NVMe Specification Version (VS): 1.3 00:24:27.875 NVMe Specification Version (Identify): 1.3 00:24:27.875 Maximum Queue Entries: 1024 
00:24:27.875 Contiguous Queues Required: No 00:24:27.875 Arbitration Mechanisms Supported 00:24:27.875 Weighted Round Robin: Not Supported 00:24:27.875 Vendor Specific: Not Supported 00:24:27.875 Reset Timeout: 7500 ms 00:24:27.875 Doorbell Stride: 4 bytes 00:24:27.875 NVM Subsystem Reset: Not Supported 00:24:27.875 Command Sets Supported 00:24:27.875 NVM Command Set: Supported 00:24:27.875 Boot Partition: Not Supported 00:24:27.875 Memory Page Size Minimum: 4096 bytes 00:24:27.875 Memory Page Size Maximum: 4096 bytes 00:24:27.875 Persistent Memory Region: Not Supported 00:24:27.875 Optional Asynchronous Events Supported 00:24:27.875 Namespace Attribute Notices: Not Supported 00:24:27.875 Firmware Activation Notices: Not Supported 00:24:27.875 ANA Change Notices: Not Supported 00:24:27.875 PLE Aggregate Log Change Notices: Not Supported 00:24:27.875 LBA Status Info Alert Notices: Not Supported 00:24:27.875 EGE Aggregate Log Change Notices: Not Supported 00:24:27.875 Normal NVM Subsystem Shutdown event: Not Supported 00:24:27.875 Zone Descriptor Change Notices: Not Supported 00:24:27.875 Discovery Log Change Notices: Supported 00:24:27.875 Controller Attributes 00:24:27.875 128-bit Host Identifier: Not Supported 00:24:27.875 Non-Operational Permissive Mode: Not Supported 00:24:27.875 NVM Sets: Not Supported 00:24:27.875 Read Recovery Levels: Not Supported 00:24:27.875 Endurance Groups: Not Supported 00:24:27.875 Predictable Latency Mode: Not Supported 00:24:27.875 Traffic Based Keep ALive: Not Supported 00:24:27.875 Namespace Granularity: Not Supported 00:24:27.875 SQ Associations: Not Supported 00:24:27.875 UUID List: Not Supported 00:24:27.875 Multi-Domain Subsystem: Not Supported 00:24:27.875 Fixed Capacity Management: Not Supported 00:24:27.875 Variable Capacity Management: Not Supported 00:24:27.875 Delete Endurance Group: Not Supported 00:24:27.875 Delete NVM Set: Not Supported 00:24:27.875 Extended LBA Formats Supported: Not Supported 00:24:27.875 Flexible 
Data Placement Supported: Not Supported 00:24:27.875 00:24:27.875 Controller Memory Buffer Support 00:24:27.875 ================================ 00:24:27.875 Supported: No 00:24:27.875 00:24:27.875 Persistent Memory Region Support 00:24:27.875 ================================ 00:24:27.875 Supported: No 00:24:27.875 00:24:27.875 Admin Command Set Attributes 00:24:27.875 ============================ 00:24:27.875 Security Send/Receive: Not Supported 00:24:27.875 Format NVM: Not Supported 00:24:27.875 Firmware Activate/Download: Not Supported 00:24:27.875 Namespace Management: Not Supported 00:24:27.875 Device Self-Test: Not Supported 00:24:27.875 Directives: Not Supported 00:24:27.875 NVMe-MI: Not Supported 00:24:27.875 Virtualization Management: Not Supported 00:24:27.875 Doorbell Buffer Config: Not Supported 00:24:27.875 Get LBA Status Capability: Not Supported 00:24:27.875 Command & Feature Lockdown Capability: Not Supported 00:24:27.875 Abort Command Limit: 1 00:24:27.875 Async Event Request Limit: 1 00:24:27.875 Number of Firmware Slots: N/A 00:24:27.875 Firmware Slot 1 Read-Only: N/A 00:24:27.875 Firmware Activation Without Reset: N/A 00:24:27.875 Multiple Update Detection Support: N/A 00:24:27.875 Firmware Update Granularity: No Information Provided 00:24:27.875 Per-Namespace SMART Log: No 00:24:27.875 Asymmetric Namespace Access Log Page: Not Supported 00:24:27.876 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:27.876 Command Effects Log Page: Not Supported 00:24:27.876 Get Log Page Extended Data: Supported 00:24:27.876 Telemetry Log Pages: Not Supported 00:24:27.876 Persistent Event Log Pages: Not Supported 00:24:27.876 Supported Log Pages Log Page: May Support 00:24:27.876 Commands Supported & Effects Log Page: Not Supported 00:24:27.876 Feature Identifiers & Effects Log Page:May Support 00:24:27.876 NVMe-MI Commands & Effects Log Page: May Support 00:24:27.876 Data Area 4 for Telemetry Log: Not Supported 00:24:27.876 Error Log Page Entries 
Supported: 1 00:24:27.876 Keep Alive: Not Supported 00:24:27.876 00:24:27.876 NVM Command Set Attributes 00:24:27.876 ========================== 00:24:27.876 Submission Queue Entry Size 00:24:27.876 Max: 1 00:24:27.876 Min: 1 00:24:27.876 Completion Queue Entry Size 00:24:27.876 Max: 1 00:24:27.876 Min: 1 00:24:27.876 Number of Namespaces: 0 00:24:27.876 Compare Command: Not Supported 00:24:27.876 Write Uncorrectable Command: Not Supported 00:24:27.876 Dataset Management Command: Not Supported 00:24:27.876 Write Zeroes Command: Not Supported 00:24:27.876 Set Features Save Field: Not Supported 00:24:27.876 Reservations: Not Supported 00:24:27.876 Timestamp: Not Supported 00:24:27.876 Copy: Not Supported 00:24:27.876 Volatile Write Cache: Not Present 00:24:27.876 Atomic Write Unit (Normal): 1 00:24:27.876 Atomic Write Unit (PFail): 1 00:24:27.876 Atomic Compare & Write Unit: 1 00:24:27.876 Fused Compare & Write: Not Supported 00:24:27.876 Scatter-Gather List 00:24:27.876 SGL Command Set: Supported 00:24:27.876 SGL Keyed: Not Supported 00:24:27.876 SGL Bit Bucket Descriptor: Not Supported 00:24:27.876 SGL Metadata Pointer: Not Supported 00:24:27.876 Oversized SGL: Not Supported 00:24:27.876 SGL Metadata Address: Not Supported 00:24:27.876 SGL Offset: Supported 00:24:27.876 Transport SGL Data Block: Not Supported 00:24:27.876 Replay Protected Memory Block: Not Supported 00:24:27.876 00:24:27.876 Firmware Slot Information 00:24:27.876 ========================= 00:24:27.876 Active slot: 0 00:24:27.876 00:24:27.876 00:24:27.876 Error Log 00:24:27.876 ========= 00:24:27.876 00:24:27.876 Active Namespaces 00:24:27.876 ================= 00:24:27.876 Discovery Log Page 00:24:27.876 ================== 00:24:27.876 Generation Counter: 2 00:24:27.876 Number of Records: 2 00:24:27.876 Record Format: 0 00:24:27.876 00:24:27.876 Discovery Log Entry 0 00:24:27.876 ---------------------- 00:24:27.876 Transport Type: 3 (TCP) 00:24:27.876 Address Family: 1 (IPv4) 00:24:27.876 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:27.876 Entry Flags: 00:24:27.876 Duplicate Returned Information: 0 00:24:27.876 Explicit Persistent Connection Support for Discovery: 0 00:24:27.876 Transport Requirements: 00:24:27.876 Secure Channel: Not Specified 00:24:27.876 Port ID: 1 (0x0001) 00:24:27.876 Controller ID: 65535 (0xffff) 00:24:27.876 Admin Max SQ Size: 32 00:24:27.876 Transport Service Identifier: 4420 00:24:27.876 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:27.876 Transport Address: 10.0.0.1 00:24:27.876 Discovery Log Entry 1 00:24:27.876 ---------------------- 00:24:27.876 Transport Type: 3 (TCP) 00:24:27.876 Address Family: 1 (IPv4) 00:24:27.876 Subsystem Type: 2 (NVM Subsystem) 00:24:27.876 Entry Flags: 00:24:27.876 Duplicate Returned Information: 0 00:24:27.876 Explicit Persistent Connection Support for Discovery: 0 00:24:27.876 Transport Requirements: 00:24:27.876 Secure Channel: Not Specified 00:24:27.876 Port ID: 1 (0x0001) 00:24:27.876 Controller ID: 65535 (0xffff) 00:24:27.876 Admin Max SQ Size: 32 00:24:27.876 Transport Service Identifier: 4420 00:24:27.876 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:27.876 Transport Address: 10.0.0.1 00:24:27.876 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:27.876 get_feature(0x01) failed 00:24:27.876 get_feature(0x02) failed 00:24:27.876 get_feature(0x04) failed 00:24:27.876 ===================================================== 00:24:27.876 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:27.876 ===================================================== 00:24:27.876 Controller Capabilities/Features 00:24:27.876 ================================ 00:24:27.876 Vendor ID: 0000 00:24:27.876 Subsystem Vendor ID: 
0000 00:24:27.876 Serial Number: 00a7475278a9519eb182 00:24:27.876 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:27.876 Firmware Version: 6.8.9-20 00:24:27.876 Recommended Arb Burst: 6 00:24:27.876 IEEE OUI Identifier: 00 00 00 00:24:27.876 Multi-path I/O 00:24:27.876 May have multiple subsystem ports: Yes 00:24:27.876 May have multiple controllers: Yes 00:24:27.876 Associated with SR-IOV VF: No 00:24:27.876 Max Data Transfer Size: Unlimited 00:24:27.876 Max Number of Namespaces: 1024 00:24:27.876 Max Number of I/O Queues: 128 00:24:27.876 NVMe Specification Version (VS): 1.3 00:24:27.876 NVMe Specification Version (Identify): 1.3 00:24:27.876 Maximum Queue Entries: 1024 00:24:27.876 Contiguous Queues Required: No 00:24:27.876 Arbitration Mechanisms Supported 00:24:27.876 Weighted Round Robin: Not Supported 00:24:27.876 Vendor Specific: Not Supported 00:24:27.876 Reset Timeout: 7500 ms 00:24:27.876 Doorbell Stride: 4 bytes 00:24:27.876 NVM Subsystem Reset: Not Supported 00:24:27.876 Command Sets Supported 00:24:27.876 NVM Command Set: Supported 00:24:27.876 Boot Partition: Not Supported 00:24:27.876 Memory Page Size Minimum: 4096 bytes 00:24:27.876 Memory Page Size Maximum: 4096 bytes 00:24:27.876 Persistent Memory Region: Not Supported 00:24:27.876 Optional Asynchronous Events Supported 00:24:27.876 Namespace Attribute Notices: Supported 00:24:27.876 Firmware Activation Notices: Not Supported 00:24:27.876 ANA Change Notices: Supported 00:24:27.876 PLE Aggregate Log Change Notices: Not Supported 00:24:27.876 LBA Status Info Alert Notices: Not Supported 00:24:27.876 EGE Aggregate Log Change Notices: Not Supported 00:24:27.876 Normal NVM Subsystem Shutdown event: Not Supported 00:24:27.876 Zone Descriptor Change Notices: Not Supported 00:24:27.876 Discovery Log Change Notices: Not Supported 00:24:27.876 Controller Attributes 00:24:27.876 128-bit Host Identifier: Supported 00:24:27.876 Non-Operational Permissive Mode: Not Supported 00:24:27.876 NVM Sets: Not 
Supported 00:24:27.876 Read Recovery Levels: Not Supported 00:24:27.876 Endurance Groups: Not Supported 00:24:27.876 Predictable Latency Mode: Not Supported 00:24:27.876 Traffic Based Keep ALive: Supported 00:24:27.876 Namespace Granularity: Not Supported 00:24:27.876 SQ Associations: Not Supported 00:24:27.876 UUID List: Not Supported 00:24:27.876 Multi-Domain Subsystem: Not Supported 00:24:27.876 Fixed Capacity Management: Not Supported 00:24:27.876 Variable Capacity Management: Not Supported 00:24:27.876 Delete Endurance Group: Not Supported 00:24:27.876 Delete NVM Set: Not Supported 00:24:27.876 Extended LBA Formats Supported: Not Supported 00:24:27.876 Flexible Data Placement Supported: Not Supported 00:24:27.876 00:24:27.876 Controller Memory Buffer Support 00:24:27.876 ================================ 00:24:27.876 Supported: No 00:24:27.876 00:24:27.876 Persistent Memory Region Support 00:24:27.876 ================================ 00:24:27.876 Supported: No 00:24:27.876 00:24:27.876 Admin Command Set Attributes 00:24:27.876 ============================ 00:24:27.876 Security Send/Receive: Not Supported 00:24:27.876 Format NVM: Not Supported 00:24:27.876 Firmware Activate/Download: Not Supported 00:24:27.876 Namespace Management: Not Supported 00:24:27.876 Device Self-Test: Not Supported 00:24:27.876 Directives: Not Supported 00:24:27.876 NVMe-MI: Not Supported 00:24:27.876 Virtualization Management: Not Supported 00:24:27.876 Doorbell Buffer Config: Not Supported 00:24:27.876 Get LBA Status Capability: Not Supported 00:24:27.876 Command & Feature Lockdown Capability: Not Supported 00:24:27.876 Abort Command Limit: 4 00:24:27.876 Async Event Request Limit: 4 00:24:27.876 Number of Firmware Slots: N/A 00:24:27.876 Firmware Slot 1 Read-Only: N/A 00:24:27.876 Firmware Activation Without Reset: N/A 00:24:27.876 Multiple Update Detection Support: N/A 00:24:27.876 Firmware Update Granularity: No Information Provided 00:24:27.876 Per-Namespace SMART Log: Yes 
00:24:27.876 Asymmetric Namespace Access Log Page: Supported 00:24:27.876 ANA Transition Time : 10 sec 00:24:27.876 00:24:27.876 Asymmetric Namespace Access Capabilities 00:24:27.876 ANA Optimized State : Supported 00:24:27.876 ANA Non-Optimized State : Supported 00:24:27.876 ANA Inaccessible State : Supported 00:24:27.876 ANA Persistent Loss State : Supported 00:24:27.877 ANA Change State : Supported 00:24:27.877 ANAGRPID is not changed : No 00:24:27.877 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:27.877 00:24:27.877 ANA Group Identifier Maximum : 128 00:24:27.877 Number of ANA Group Identifiers : 128 00:24:27.877 Max Number of Allowed Namespaces : 1024 00:24:27.877 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:27.877 Command Effects Log Page: Supported 00:24:27.877 Get Log Page Extended Data: Supported 00:24:27.877 Telemetry Log Pages: Not Supported 00:24:27.877 Persistent Event Log Pages: Not Supported 00:24:27.877 Supported Log Pages Log Page: May Support 00:24:27.877 Commands Supported & Effects Log Page: Not Supported 00:24:27.877 Feature Identifiers & Effects Log Page:May Support 00:24:27.877 NVMe-MI Commands & Effects Log Page: May Support 00:24:27.877 Data Area 4 for Telemetry Log: Not Supported 00:24:27.877 Error Log Page Entries Supported: 128 00:24:27.877 Keep Alive: Supported 00:24:27.877 Keep Alive Granularity: 1000 ms 00:24:27.877 00:24:27.877 NVM Command Set Attributes 00:24:27.877 ========================== 00:24:27.877 Submission Queue Entry Size 00:24:27.877 Max: 64 00:24:27.877 Min: 64 00:24:27.877 Completion Queue Entry Size 00:24:27.877 Max: 16 00:24:27.877 Min: 16 00:24:27.877 Number of Namespaces: 1024 00:24:27.877 Compare Command: Not Supported 00:24:27.877 Write Uncorrectable Command: Not Supported 00:24:27.877 Dataset Management Command: Supported 00:24:27.877 Write Zeroes Command: Supported 00:24:27.877 Set Features Save Field: Not Supported 00:24:27.877 Reservations: Not Supported 00:24:27.877 Timestamp: Not Supported 
00:24:27.877 Copy: Not Supported 00:24:27.877 Volatile Write Cache: Present 00:24:27.877 Atomic Write Unit (Normal): 1 00:24:27.877 Atomic Write Unit (PFail): 1 00:24:27.877 Atomic Compare & Write Unit: 1 00:24:27.877 Fused Compare & Write: Not Supported 00:24:27.877 Scatter-Gather List 00:24:27.877 SGL Command Set: Supported 00:24:27.877 SGL Keyed: Not Supported 00:24:27.877 SGL Bit Bucket Descriptor: Not Supported 00:24:27.877 SGL Metadata Pointer: Not Supported 00:24:27.877 Oversized SGL: Not Supported 00:24:27.877 SGL Metadata Address: Not Supported 00:24:27.877 SGL Offset: Supported 00:24:27.877 Transport SGL Data Block: Not Supported 00:24:27.877 Replay Protected Memory Block: Not Supported 00:24:27.877 00:24:27.877 Firmware Slot Information 00:24:27.877 ========================= 00:24:27.877 Active slot: 0 00:24:27.877 00:24:27.877 Asymmetric Namespace Access 00:24:27.877 =========================== 00:24:27.877 Change Count : 0 00:24:27.877 Number of ANA Group Descriptors : 1 00:24:27.877 ANA Group Descriptor : 0 00:24:27.877 ANA Group ID : 1 00:24:27.877 Number of NSID Values : 1 00:24:27.877 Change Count : 0 00:24:27.877 ANA State : 1 00:24:27.877 Namespace Identifier : 1 00:24:27.877 00:24:27.877 Commands Supported and Effects 00:24:27.877 ============================== 00:24:27.877 Admin Commands 00:24:27.877 -------------- 00:24:27.877 Get Log Page (02h): Supported 00:24:27.877 Identify (06h): Supported 00:24:27.877 Abort (08h): Supported 00:24:27.877 Set Features (09h): Supported 00:24:27.877 Get Features (0Ah): Supported 00:24:27.877 Asynchronous Event Request (0Ch): Supported 00:24:27.877 Keep Alive (18h): Supported 00:24:27.877 I/O Commands 00:24:27.877 ------------ 00:24:27.877 Flush (00h): Supported 00:24:27.877 Write (01h): Supported LBA-Change 00:24:27.877 Read (02h): Supported 00:24:27.877 Write Zeroes (08h): Supported LBA-Change 00:24:27.877 Dataset Management (09h): Supported 00:24:27.877 00:24:27.877 Error Log 00:24:27.877 ========= 
00:24:27.877 Entry: 0 00:24:27.877 Error Count: 0x3 00:24:27.877 Submission Queue Id: 0x0 00:24:27.877 Command Id: 0x5 00:24:27.877 Phase Bit: 0 00:24:27.877 Status Code: 0x2 00:24:27.877 Status Code Type: 0x0 00:24:27.877 Do Not Retry: 1 00:24:27.877 Error Location: 0x28 00:24:27.877 LBA: 0x0 00:24:27.877 Namespace: 0x0 00:24:27.877 Vendor Log Page: 0x0 00:24:27.877 ----------- 00:24:27.877 Entry: 1 00:24:27.877 Error Count: 0x2 00:24:27.877 Submission Queue Id: 0x0 00:24:27.877 Command Id: 0x5 00:24:27.877 Phase Bit: 0 00:24:27.877 Status Code: 0x2 00:24:27.877 Status Code Type: 0x0 00:24:27.877 Do Not Retry: 1 00:24:27.877 Error Location: 0x28 00:24:27.877 LBA: 0x0 00:24:27.877 Namespace: 0x0 00:24:27.877 Vendor Log Page: 0x0 00:24:27.877 ----------- 00:24:27.877 Entry: 2 00:24:27.877 Error Count: 0x1 00:24:27.877 Submission Queue Id: 0x0 00:24:27.877 Command Id: 0x4 00:24:27.877 Phase Bit: 0 00:24:27.877 Status Code: 0x2 00:24:27.877 Status Code Type: 0x0 00:24:27.877 Do Not Retry: 1 00:24:27.877 Error Location: 0x28 00:24:27.877 LBA: 0x0 00:24:27.877 Namespace: 0x0 00:24:27.877 Vendor Log Page: 0x0 00:24:27.877 00:24:27.877 Number of Queues 00:24:27.877 ================ 00:24:27.877 Number of I/O Submission Queues: 128 00:24:27.877 Number of I/O Completion Queues: 128 00:24:27.877 00:24:27.877 ZNS Specific Controller Data 00:24:27.877 ============================ 00:24:27.877 Zone Append Size Limit: 0 00:24:27.877 00:24:27.877 00:24:27.877 Active Namespaces 00:24:27.877 ================= 00:24:27.877 get_feature(0x05) failed 00:24:27.877 Namespace ID:1 00:24:27.877 Command Set Identifier: NVM (00h) 00:24:27.877 Deallocate: Supported 00:24:27.877 Deallocated/Unwritten Error: Not Supported 00:24:27.877 Deallocated Read Value: Unknown 00:24:27.877 Deallocate in Write Zeroes: Not Supported 00:24:27.877 Deallocated Guard Field: 0xFFFF 00:24:27.877 Flush: Supported 00:24:27.877 Reservation: Not Supported 00:24:27.877 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:27.877 Size (in LBAs): 1953525168 (931GiB) 00:24:27.877 Capacity (in LBAs): 1953525168 (931GiB) 00:24:27.877 Utilization (in LBAs): 1953525168 (931GiB) 00:24:27.877 UUID: 5edd4a0b-589b-4627-b9a2-3c9fa1abec65 00:24:27.877 Thin Provisioning: Not Supported 00:24:27.877 Per-NS Atomic Units: Yes 00:24:27.877 Atomic Boundary Size (Normal): 0 00:24:27.877 Atomic Boundary Size (PFail): 0 00:24:27.877 Atomic Boundary Offset: 0 00:24:27.877 NGUID/EUI64 Never Reused: No 00:24:27.877 ANA group ID: 1 00:24:27.877 Namespace Write Protected: No 00:24:27.877 Number of LBA Formats: 1 00:24:27.877 Current LBA Format: LBA Format #00 00:24:27.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:27.877 00:24:27.877 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:27.877 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.877 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:27.877 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.877 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:27.877 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.877 03:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.877 rmmod nvme_tcp 00:24:28.136 rmmod nvme_fabrics 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.136 03:32:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.037 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.037 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:30.037 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:30.037 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:30.037 03:32:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:30.037 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:30.037 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:30.037 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:30.037 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:30.037 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:30.295 03:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:32.825 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:32.825 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:32.825 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:33.083 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:24:34.018 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:34.018 00:24:34.018 real 0m16.065s 00:24:34.018 user 0m4.134s 00:24:34.018 sys 0m8.281s 00:24:34.018 03:32:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:34.018 03:32:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.018 ************************************ 00:24:34.018 END TEST nvmf_identify_kernel_target 00:24:34.018 ************************************ 00:24:34.018 03:32:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:34.018 03:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:34.018 03:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.018 03:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.018 ************************************ 00:24:34.018 START TEST nvmf_auth_host 00:24:34.018 ************************************ 00:24:34.018 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:34.276 * Looking for test storage... 
00:24:34.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:34.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.276 --rc genhtml_branch_coverage=1 00:24:34.276 --rc genhtml_function_coverage=1 00:24:34.276 --rc genhtml_legend=1 00:24:34.276 --rc geninfo_all_blocks=1 00:24:34.276 --rc geninfo_unexecuted_blocks=1 00:24:34.276 00:24:34.276 ' 00:24:34.276 03:32:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:34.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.276 --rc genhtml_branch_coverage=1 00:24:34.276 --rc genhtml_function_coverage=1 00:24:34.276 --rc genhtml_legend=1 00:24:34.276 --rc geninfo_all_blocks=1 00:24:34.276 --rc geninfo_unexecuted_blocks=1 00:24:34.276 00:24:34.276 ' 00:24:34.276 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:34.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.276 --rc genhtml_branch_coverage=1 00:24:34.276 --rc genhtml_function_coverage=1 00:24:34.276 --rc genhtml_legend=1 00:24:34.276 --rc geninfo_all_blocks=1 00:24:34.276 --rc geninfo_unexecuted_blocks=1 00:24:34.276 00:24:34.276 ' 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:34.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.277 --rc genhtml_branch_coverage=1 00:24:34.277 --rc genhtml_function_coverage=1 00:24:34.277 --rc genhtml_legend=1 00:24:34.277 --rc geninfo_all_blocks=1 00:24:34.277 --rc geninfo_unexecuted_blocks=1 00:24:34.277 00:24:34.277 ' 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.277 03:32:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:34.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:34.277 03:32:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:34.277 03:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:40.839 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:40.839 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:40.839 Found net devices under 0000:86:00.0: cvl_0_0 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:40.839 Found net devices under 0000:86:00.1: cvl_0_1 00:24:40.839 03:32:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.839 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:40.840 03:32:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:40.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:24:40.840 00:24:40.840 --- 10.0.0.2 ping statistics --- 00:24:40.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.840 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:24:40.840 00:24:40.840 --- 10.0.0.1 ping statistics --- 00:24:40.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.840 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:40.840 03:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2743101 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:40.840 03:33:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2743101 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2743101 ']' 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=13fe025b2caaeeb4ada5260b7c47588e 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hBf 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13fe025b2caaeeb4ada5260b7c47588e 0 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13fe025b2caaeeb4ada5260b7c47588e 0 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13fe025b2caaeeb4ada5260b7c47588e 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hBf 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hBf 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.hBf 00:24:40.840 03:33:00 
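The `gen_dhchap_key` trace above reads random bytes with `xxd` and hands them to an inline `python -` step (`format_dhchap_key`) to produce the secret written to `/tmp/spdk.key-*`. A minimal sketch of that formatting step follows; it assumes the NVMe TP 8006 DH-HMAC-CHAP secret representation (base64 over the raw secret bytes concatenated with their little-endian CRC-32), which may differ in detail from the actual snippet in `nvmf/common.sh`:

```python
# Hypothetical reconstruction of the inline "python -" formatting step
# (format_dhchap_key) seen in the trace. Assumption: DHHC-1 secrets are
# base64(secret || CRC-32(secret), little-endian) per NVMe TP 8006.
import base64
import binascii
import struct

def format_dhchap_key(hex_key: str, digest: int) -> str:
    raw = bytes.fromhex(hex_key)                  # secret bytes from xxd
    crc = struct.pack("<I", binascii.crc32(raw))  # little-endian CRC-32
    b64 = base64.b64encode(raw + crc).decode()
    return f"DHHC-1:{digest:02x}:{b64}:"

# e.g. the first key in the trace (32 hex chars, digest 0 == null):
secret = format_dhchap_key("13fe025b2caaeeb4ada5260b7c47588e", 0)
```

The digest argument (0-3) matches the `digests` map in the trace (`null`/`sha256`/`sha384`/`sha512`); the result is what gets `chmod 0600`'d and stored in `keys[]`/`ckeys[]`.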
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6ff996f05ff42771d6c26a529a53964a4fe98c87df158edb7f68c80dbf353467 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.T0J 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6ff996f05ff42771d6c26a529a53964a4fe98c87df158edb7f68c80dbf353467 3 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6ff996f05ff42771d6c26a529a53964a4fe98c87df158edb7f68c80dbf353467 3 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6ff996f05ff42771d6c26a529a53964a4fe98c87df158edb7f68c80dbf353467 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.T0J 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.T0J 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.T0J 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:40.840 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba58715cedb5c52eb224de9e31a92bbdb0809d30a5f34846 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.N1h 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba58715cedb5c52eb224de9e31a92bbdb0809d30a5f34846 0 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba58715cedb5c52eb224de9e31a92bbdb0809d30a5f34846 0 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:40.841 03:33:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba58715cedb5c52eb224de9e31a92bbdb0809d30a5f34846 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.N1h 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.N1h 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.N1h 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=56d43d889207aabf14db14aed07de08f7c02d3258957ad98 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Xnn 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 56d43d889207aabf14db14aed07de08f7c02d3258957ad98 2 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 56d43d889207aabf14db14aed07de08f7c02d3258957ad98 2 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=56d43d889207aabf14db14aed07de08f7c02d3258957ad98 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Xnn 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Xnn 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Xnn 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d714a0f5da2ba6d9ff6b5cc228ec4986 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6cX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d714a0f5da2ba6d9ff6b5cc228ec4986 1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d714a0f5da2ba6d9ff6b5cc228ec4986 1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d714a0f5da2ba6d9ff6b5cc228ec4986 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6cX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6cX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6cX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=da56daf914418ca0bafac586de188e9d 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Am8 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key da56daf914418ca0bafac586de188e9d 1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 da56daf914418ca0bafac586de188e9d 1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=da56daf914418ca0bafac586de188e9d 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Am8 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Am8 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Am8 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:40.841 03:33:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3d5585e9bff6f0b0d7fbdf4deb7c136bc1ecaf33da72ac79 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Acw 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3d5585e9bff6f0b0d7fbdf4deb7c136bc1ecaf33da72ac79 2 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3d5585e9bff6f0b0d7fbdf4deb7c136bc1ecaf33da72ac79 2 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3d5585e9bff6f0b0d7fbdf4deb7c136bc1ecaf33da72ac79 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Acw 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Acw 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Acw 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f6e1d86ec2ed091f4d5c46da4e24c61 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zzW 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f6e1d86ec2ed091f4d5c46da4e24c61 0 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f6e1d86ec2ed091f4d5c46da4e24c61 0 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f6e1d86ec2ed091f4d5c46da4e24c61 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zzW 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zzW 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zzW 00:24:40.841 03:33:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:40.841 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e3a35c274d3cbc82e1532da2289ad3cafcd5fad2e0406ebc3ef429a46848e0fc 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WrF 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e3a35c274d3cbc82e1532da2289ad3cafcd5fad2e0406ebc3ef429a46848e0fc 3 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e3a35c274d3cbc82e1532da2289ad3cafcd5fad2e0406ebc3ef429a46848e0fc 3 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e3a35c274d3cbc82e1532da2289ad3cafcd5fad2e0406ebc3ef429a46848e0fc 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WrF 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WrF 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.WrF 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2743101 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2743101 ']' 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
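The `gen_dhchap_key`/`format_dhchap_key` steps traced above hex-dump `/dev/urandom` with `xxd` and pipe the result through an inline `python -` heredoc. A minimal sketch of what that heredoc appears to compute, under one assumption: the keys later in the log (e.g. `DHHC-1:00:YmE1...`) base64-decode to the ASCII hex string followed by four extra bytes, which we take to be the little-endian CRC-32 trailer of the DHHC-1 secret representation. This is an illustration, not the verbatim `nvmf/common.sh` helper:

```shell
# Sketch of format_dhchap_key/format_key as traced above (assumed details
# are marked in comments; digest ids 0-3 map to null/sha256/sha384/sha512).
format_dhchap_key() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode("ascii")            # the hex string is used verbatim
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed 4-byte CRC-32 trailer
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]),
                                 base64.b64encode(secret + crc).decode()))
EOF
}

# Key taken from the trace above (xxd -p -c0 -l 16 /dev/urandom output).
format_dhchap_key 3f6e1d86ec2ed091f4d5c46da4e24c61 0
```

The trace then writes the formatted key to a `mktemp -t spdk.key-<digest>.XXX` file, `chmod 0600`s it, and records the path in `keys[]`/`ckeys[]`.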
00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.842 03:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hBf 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.T0J ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.T0J 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.N1h 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Xnn ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xnn 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6cX 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Am8 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Am8 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Acw 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zzW ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zzW 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.WrF 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.101 03:33:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:24:41.101 03:33:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:24:43.633 Waiting for block devices as requested
00:24:43.633 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:24:43.891 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:24:43.891 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:24:44.149 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:24:44.149 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:24:44.149 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:24:44.149 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:24:44.407 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:24:44.407 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:24:44.407 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:24:44.408 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:24:44.666 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:24:44.666 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:24:44.666 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:24:44.924 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:24:44.924 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:24:44.924 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e
/sys/block/nvme0n1/queue/zoned ]] 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:45.509 No valid GPT data, bailing 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:45.509 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:45.510 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:24:45.768
00:24:45.768 Discovery Log Number of Records 2, Generation counter 2
00:24:45.768 =====Discovery Log Entry 0======
00:24:45.768 trtype: tcp
00:24:45.768 adrfam: ipv4
00:24:45.768 subtype: current discovery subsystem
00:24:45.768 treq: not specified, sq flow control disable supported
00:24:45.768 portid: 1
00:24:45.768 trsvcid: 4420
00:24:45.768 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:24:45.768 traddr: 10.0.0.1
00:24:45.768 eflags: none
00:24:45.768 sectype: none
00:24:45.768 =====Discovery Log Entry 1======
00:24:45.768 trtype: tcp
00:24:45.768 adrfam: ipv4
00:24:45.768 subtype: nvme subsystem
00:24:45.768 treq: not specified, sq flow control disable supported
00:24:45.768 portid: 1
00:24:45.768 trsvcid: 4420
00:24:45.768 subnqn: nqn.2024-02.io.spdk:cnode0
00:24:45.768 traddr: 10.0.0.1
00:24:45.768 eflags: none
00:24:45.768 sectype: none
00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.768 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.769 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.028 nvme0n1 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
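The `configure_kernel_target` sequence traced earlier (the `mkdir`/`echo`/`ln -s` calls from nvmf/common.sh@686 through @705) can be collected into one sketch. xtrace hides the redirection targets of the `echo`s, so the configfs attribute names below are assumptions based on the kernel nvmet configfs layout; treat this as an illustration of the shape of the setup, not a verified drop-in script (it needs root and the nvmet/nvmet-tcp modules loaded):

```shell
# Hedged sketch of the kernel NVMe-oF target setup seen in the trace.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
ns=$subsys/namespaces/1
port=$nvmet/ports/1

mkdir "$subsys" "$ns" "$port"                       # subsystem, namespace, port

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"  # assumed target file
echo 1            > "$subsys/attr_allow_any_host"             # assumed target file
echo /dev/nvme0n1 > "$ns/device_path"                         # backing block device
echo 1            > "$ns/enable"

echo 10.0.0.1 > "$port/addr_traddr"                 # listen address from the trace
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"                 # expose the subsystem on the port
```

After this, the trace runs `nvme discover` against 10.0.0.1:4420 and sees two discovery-log records (the discovery subsystem and `nqn.2024-02.io.spdk:cnode0`), then registers the host NQN under `$nvmet/hosts` and links it into the subsystem's `allowed_hosts`.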
00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.028 03:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.028 nvme0n1 00:24:46.028 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.028 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.028 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.028 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.028 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.028 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.287 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.288 03:33:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.288 
03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.288 nvme0n1
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]]
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.288 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.547 nvme0n1
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==:
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa:
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:46.547 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==:
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]]
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa:
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.548 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.807 nvme0n1
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=:
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=:
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.807 03:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.065 nvme0n1
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:47.065 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]]
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.066 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.324 nvme0n1
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]]
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.325 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.584 nvme0n1
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]]
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
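The repeated `get_main_ns_ip` runs in this trace (nvmf/common.sh@769-783) all follow the same pattern: an associative array maps each transport to the *name* of the environment variable holding the right IP, and indirect expansion resolves it. A minimal standalone sketch of that logic, with variable names taken from the log but sample IP values that are assumptions:

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip logic visible in the trace above.
# NVMF_* values below are placeholders, not taken from a real rig.

NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates

    # Map transport -> name of the variable that holds the address to dial.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # Indirect expansion: first get the variable name, then its value.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip
```

With `TEST_TRANSPORT=tcp` this selects `NVMF_INITIATOR_IP`, matching the `ip=NVMF_INITIATOR_IP` / `echo 10.0.0.1` steps seen throughout the log.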
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:47.584 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:47.585 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:47.585 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:47.585 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:47.585 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:47.585 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:47.585 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:47.585 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.585 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.843 nvme0n1
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==:
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa:
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:47.843 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==:
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]]
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa:
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.844 03:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.103 nvme0n1
00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.103 03:33:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.103 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.362 nvme0n1 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:48.362 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.363 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.622 nvme0n1 00:24:48.622 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.622 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.623 
03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.623 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.882 nvme0n1 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.882 03:33:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.882 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.883 03:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.142 nvme0n1 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.142 03:33:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:49.142 
03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.142 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.402 03:33:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.402 nvme0n1 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.402 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.661 03:33:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.661 
03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.661 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.921 nvme0n1 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.921 03:33:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.921 03:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.180 nvme0n1 00:24:50.180 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.180 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.180 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.180 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.180 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.440 03:33:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.440 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.700 nvme0n1 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.700 03:33:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.700 03:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.270 nvme0n1 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.270 03:33:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.270 03:33:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.270 03:33:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.270 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.529 nvme0n1 00:24:51.529 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.788 03:33:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.788 03:33:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.788 03:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.047 nvme0n1
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]]
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.047 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:52.306 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:52.307 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:52.307 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.307 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.875 nvme0n1
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]]
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.875 03:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.645 nvme0n1
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]]
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.646 03:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.226 nvme0n1
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==:
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa:
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==:
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]]
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa:
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:54.226 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:54.227 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:54.227 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.227 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.795 nvme0n1
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=:
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=:
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:54.795 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.796 03:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.361 nvme0n1
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]]
00:24:55.361 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.362 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.619 nvme0n1
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:55.619 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]]
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.620 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.878 nvme0n1
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:24:55.878 03:33:15
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.878 
03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.878 03:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.136 nvme0n1 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.136 03:33:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.136 nvme0n1 00:24:56.136 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.395 03:33:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.395 nvme0n1 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.395 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.653 nvme0n1 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.653 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.912 
03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.912 nvme0n1 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.912 03:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 
00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:24:56.912 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:56.913 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.913 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.913 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.913 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.913 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.913 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:56.913 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.913 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.171 03:33:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.171 nvme0n1 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.171 03:33:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:57.171 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.172 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.430 nvme0n1 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:57.430 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.431 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.690 nvme0n1 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.690 03:33:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.690 03:33:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.690 03:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.690 03:33:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.948 nvme0n1 00:24:57.948 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.948 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.948 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.948 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.948 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.948 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.205 
03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.205 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.464 nvme0n1 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.464 03:33:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.464 03:33:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.464 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.724 nvme0n1 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.724 03:33:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.724 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.983 nvme0n1 00:24:58.983 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.983 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.983 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.983 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.983 03:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.983 03:33:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.983 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.983 03:33:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.984 
03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.984 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.242 nvme0n1 00:24:59.242 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.242 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.243 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.243 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.243 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.243 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.243 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.243 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.243 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.243 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.501 03:33:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.501 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.760 nvme0n1 
00:24:59.760 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:59.761 03:33:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.761 
03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.761 03:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.329 nvme0n1 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.329 03:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.329 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.329 03:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.330 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.589 nvme0n1 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.589 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:00.848 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:00.849 03:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.849 03:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.849 03:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.108 nvme0n1 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.108 03:33:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:01.108 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.707 nvme0n1 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:01.707 03:33:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.707 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.708 03:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.276 nvme0n1 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.276 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.844 nvme0n1 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:02.844 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.845 03:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.781 nvme0n1 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.781 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.782 03:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.347 nvme0n1 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.347 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.348 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.348 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.348 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.348 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:04.348 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.348 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:04.914 nvme0n1 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]]
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:04.914 03:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.173 nvme0n1
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.173 nvme0n1
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.173 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]]
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.432 nvme0n1
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.432 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==:
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa:
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==:
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa:
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.690 nvme0n1
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=:
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=:
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:05.690 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.947 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.947 nvme0n1
00:25:05.947 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.947 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:05.948 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:05.948 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.948 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.948 03:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB:
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]]
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=:
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.948 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:06.205 nvme0n1
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==:
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]]
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==:
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:06.205 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.206 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:06.463 nvme0n1
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb:
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]]
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J:
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:06.463 03:33:26
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.463 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.721 nvme0n1 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.721 03:33:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.721 03:33:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.721 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.979 nvme0n1 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.979 03:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:06.979 03:33:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.979 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 nvme0n1 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.237 
03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.237 
03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.237 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.494 nvme0n1 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.494 03:33:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.494 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.752 nvme0n1 00:25:07.752 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.752 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.752 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.752 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.752 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.752 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.010 03:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.269 nvme0n1 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.269 03:33:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.269 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.527 nvme0n1 00:25:08.527 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.527 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.527 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.527 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.527 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.528 03:33:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.528 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.786 nvme0n1 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.786 03:33:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.786 03:33:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.786 03:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.350 nvme0n1 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.350 03:33:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.350 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.608 nvme0n1 00:25:09.608 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.608 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.608 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.608 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.608 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.608 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:09.866 
03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.866 03:33:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.866 03:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.124 nvme0n1 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.124 03:33:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.124 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.125 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.691 nvme0n1 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.691 03:33:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.691 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.692 03:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.949 nvme0n1 00:25:10.949 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.949 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.949 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.949 
03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.949 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.949 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.949 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.949 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.950 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.950 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmZTAyNWIyY2FhZWViNGFkYTUyNjBiN2M0NzU4OGXjdpXB: 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: ]] 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZmOTk2ZjA1ZmY0Mjc3MWQ2YzI2YTUyOWE1Mzk2NGE0ZmU5OGM4N2RmMTU4ZWRiN2Y2OGM4MGRiZjM1MzQ2N+dfSV4=: 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.207 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.208 03:33:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.208 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.774 nvme0n1 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.774 03:33:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:11.774 03:33:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.774 03:33:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.774 03:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.340 nvme0n1 00:25:12.340 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.340 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.340 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.340 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.340 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.340 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.340 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.340 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.340 03:33:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.340 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:12.341 03:33:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.341 03:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.907 nvme0n1 00:25:12.907 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.907 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.907 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.907 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.907 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.907 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Q1NTg1ZTliZmY2ZjBiMGQ3ZmJkZjRkZWI3YzEzNmJjMWVjYWYzM2RhNzJhYzc5MH2l/Q==: 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: ]] 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2ZTFkODZlYzJlZDA5MWY0ZDVjNDZkYTRlMjRjNjF3F7Qa: 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.167 03:33:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.167 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.733 nvme0n1 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTNhMzVjMjc0ZDNjYmM4MmUxNTMyZGEyMjg5YWQzY2FmY2Q1ZmFkMmUwNDA2ZWJjM2VmNDI5YTQ2ODQ4ZTBmYwHRJ6Q=: 00:25:13.733 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.734 
03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.734 03:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.298 nvme0n1 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:14.298 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.299 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.557 request: 00:25:14.557 { 00:25:14.557 "name": "nvme0", 00:25:14.557 "trtype": "tcp", 00:25:14.557 "traddr": "10.0.0.1", 00:25:14.557 "adrfam": "ipv4", 00:25:14.557 "trsvcid": "4420", 00:25:14.557 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:14.557 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:14.557 "prchk_reftag": false, 00:25:14.557 "prchk_guard": false, 00:25:14.557 "hdgst": false, 00:25:14.557 "ddgst": false, 00:25:14.557 "allow_unrecognized_csi": false, 00:25:14.557 "method": "bdev_nvme_attach_controller", 00:25:14.557 "req_id": 1 00:25:14.557 } 00:25:14.557 Got JSON-RPC error 
response 00:25:14.557 response: 00:25:14.557 { 00:25:14.557 "code": -5, 00:25:14.557 "message": "Input/output error" 00:25:14.557 } 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.557 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.558 request: 
00:25:14.558 { 00:25:14.558 "name": "nvme0", 00:25:14.558 "trtype": "tcp", 00:25:14.558 "traddr": "10.0.0.1", 00:25:14.558 "adrfam": "ipv4", 00:25:14.558 "trsvcid": "4420", 00:25:14.558 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:14.558 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:14.558 "prchk_reftag": false, 00:25:14.558 "prchk_guard": false, 00:25:14.558 "hdgst": false, 00:25:14.558 "ddgst": false, 00:25:14.558 "dhchap_key": "key2", 00:25:14.558 "allow_unrecognized_csi": false, 00:25:14.558 "method": "bdev_nvme_attach_controller", 00:25:14.558 "req_id": 1 00:25:14.558 } 00:25:14.558 Got JSON-RPC error response 00:25:14.558 response: 00:25:14.558 { 00:25:14.558 "code": -5, 00:25:14.558 "message": "Input/output error" 00:25:14.558 } 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.558 03:33:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.558 request: 00:25:14.558 { 00:25:14.558 "name": "nvme0", 00:25:14.558 "trtype": "tcp", 00:25:14.558 "traddr": "10.0.0.1", 00:25:14.558 "adrfam": "ipv4", 00:25:14.558 "trsvcid": "4420", 00:25:14.558 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:14.558 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:14.558 "prchk_reftag": false, 00:25:14.558 "prchk_guard": false, 00:25:14.558 "hdgst": false, 00:25:14.558 "ddgst": false, 00:25:14.558 "dhchap_key": "key1", 00:25:14.558 "dhchap_ctrlr_key": "ckey2", 00:25:14.558 "allow_unrecognized_csi": false, 00:25:14.558 "method": "bdev_nvme_attach_controller", 00:25:14.558 "req_id": 1 00:25:14.558 } 00:25:14.558 Got JSON-RPC error response 00:25:14.558 response: 00:25:14.558 { 00:25:14.558 "code": -5, 00:25:14.558 "message": "Input/output error" 00:25:14.558 } 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:14.558 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.817 nvme0n1 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:14.817 03:33:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.817 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.075 03:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.075 request: 00:25:15.075 { 00:25:15.075 "name": "nvme0", 00:25:15.075 "dhchap_key": "key1", 00:25:15.075 "dhchap_ctrlr_key": "ckey2", 00:25:15.075 "method": "bdev_nvme_set_keys", 00:25:15.075 "req_id": 1 00:25:15.075 } 00:25:15.075 Got JSON-RPC error response 00:25:15.075 
response: 00:25:15.075 { 00:25:15.075 "code": -13, 00:25:15.075 "message": "Permission denied" 00:25:15.075 } 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:15.075 03:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:16.003 03:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.003 03:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:16.003 03:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.003 03:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.003 03:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.003 03:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:16.003 03:33:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE1ODcxNWNlZGI1YzUyZWIyMjRkZTllMzFhOTJiYmRiMDgwOWQzMGE1ZjM0ODQ2aSIakg==: 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: ]] 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZkNDNkODg5MjA3YWFiZjE0ZGIxNGFlZDA3ZGUwOGY3YzAyZDMyNTg5NTdhZDk45rgwuw==: 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.376 nvme0n1 00:25:17.376 03:33:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDcxNGEwZjVkYTJiYTZkOWZmNmI1Y2MyMjhlYzQ5ODYFzPDb: 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: ]] 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGE1NmRhZjkxNDQxOGNhMGJhZmFjNTg2ZGUxODhlOWSjBR8J: 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:17.376 03:33:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.376 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.377 request: 00:25:17.377 { 00:25:17.377 "name": "nvme0", 00:25:17.377 "dhchap_key": "key2", 00:25:17.377 "dhchap_ctrlr_key": "ckey1", 00:25:17.377 "method": "bdev_nvme_set_keys", 00:25:17.377 "req_id": 1 00:25:17.377 } 00:25:17.377 Got JSON-RPC error response 00:25:17.377 response: 00:25:17.377 { 00:25:17.377 "code": -13, 00:25:17.377 "message": "Permission denied" 00:25:17.377 } 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.377 03:33:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:17.377 03:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:18.311 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.311 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:18.311 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.311 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.570 rmmod nvme_tcp 00:25:18.570 rmmod 
nvme_fabrics 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2743101 ']' 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2743101 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2743101 ']' 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2743101 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2743101 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2743101' 00:25:18.570 killing process with pid 2743101 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2743101 00:25:18.570 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2743101 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.828 03:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.752 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.752 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:20.752 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:20.752 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:20.752 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:20.752 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:20.753 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:20.753 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:20.753 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:20.753 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:20.753 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:20.753 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:20.753 03:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:23.278 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:23.278 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:24.215 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:24.215 03:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.hBf /tmp/spdk.key-null.N1h /tmp/spdk.key-sha256.6cX /tmp/spdk.key-sha384.Acw /tmp/spdk.key-sha512.WrF 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:24.215 03:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:26.742 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:26.742 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:26.742 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:26.742 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:26.742 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:26.742 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:26.742 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:26.742 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:26.743 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:26.743 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:26.743 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:26.743 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:26.743 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:26.743 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:26.743 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:26.743 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:26.743 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:27.033 00:25:27.033 real 0m52.809s 00:25:27.033 user 0m47.683s 00:25:27.033 sys 0m11.995s 00:25:27.033 03:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.033 03:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.033 ************************************ 00:25:27.033 END TEST nvmf_auth_host 00:25:27.033 ************************************ 00:25:27.033 03:33:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:25:27.033 03:33:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:27.033 03:33:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:27.033 03:33:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.033 03:33:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.033 ************************************ 00:25:27.033 START TEST nvmf_digest 00:25:27.033 ************************************ 00:25:27.033 03:33:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:27.033 * Looking for test storage... 00:25:27.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:27.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.033 --rc genhtml_branch_coverage=1 00:25:27.033 --rc genhtml_function_coverage=1 00:25:27.033 --rc genhtml_legend=1 00:25:27.033 --rc geninfo_all_blocks=1 00:25:27.033 --rc geninfo_unexecuted_blocks=1 00:25:27.033 00:25:27.033 ' 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:27.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.033 --rc genhtml_branch_coverage=1 00:25:27.033 --rc genhtml_function_coverage=1 00:25:27.033 --rc genhtml_legend=1 00:25:27.033 --rc geninfo_all_blocks=1 00:25:27.033 --rc geninfo_unexecuted_blocks=1 00:25:27.033 00:25:27.033 ' 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:27.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.033 --rc genhtml_branch_coverage=1 00:25:27.033 --rc genhtml_function_coverage=1 00:25:27.033 --rc genhtml_legend=1 00:25:27.033 --rc geninfo_all_blocks=1 00:25:27.033 --rc geninfo_unexecuted_blocks=1 00:25:27.033 00:25:27.033 ' 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:27.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.033 --rc genhtml_branch_coverage=1 00:25:27.033 --rc genhtml_function_coverage=1 00:25:27.033 --rc genhtml_legend=1 00:25:27.033 --rc geninfo_all_blocks=1 00:25:27.033 --rc geninfo_unexecuted_blocks=1 00:25:27.033 00:25:27.033 ' 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.033 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:27.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:27.034 03:33:47 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:27.034 03:33:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.598 03:33:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:33.598 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:33.598 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.598 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:33.599 Found net devices under 0000:86:00.0: cvl_0_0 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:33.599 Found net devices under 0000:86:00.1: cvl_0_1 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:25:33.599 00:25:33.599 --- 10.0.0.2 ping statistics --- 00:25:33.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.599 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:25:33.599 00:25:33.599 --- 10.0.0.1 ping statistics --- 00:25:33.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.599 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:33.599 ************************************ 00:25:33.599 START TEST nvmf_digest_clean 00:25:33.599 ************************************ 00:25:33.599 
03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2757255 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2757255 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2757255 ']' 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.599 03:33:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.599 03:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.599 [2024-12-06 03:33:52.948915] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:25:33.599 [2024-12-06 03:33:52.948968] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.599 [2024-12-06 03:33:53.014519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.599 [2024-12-06 03:33:53.056349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.599 [2024-12-06 03:33:53.056389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.599 [2024-12-06 03:33:53.056397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.599 [2024-12-06 03:33:53.056404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.599 [2024-12-06 03:33:53.056409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:33.599 [2024-12-06 03:33:53.056983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.599 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.599 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:33.599 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.600 null0 00:25:33.600 [2024-12-06 03:33:53.226907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.600 [2024-12-06 03:33:53.251107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2757412 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2757412 /var/tmp/bperf.sock 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2757412 ']' 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:33.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.600 [2024-12-06 03:33:53.304202] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:25:33.600 [2024-12-06 03:33:53.304246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757412 ] 00:25:33.600 [2024-12-06 03:33:53.366883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.600 [2024-12-06 03:33:53.412672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:33.600 03:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.164 nvme0n1 00:25:34.164 03:33:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:34.164 03:33:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:34.164 Running I/O for 2 seconds... 00:25:36.029 24207.00 IOPS, 94.56 MiB/s [2024-12-06T02:33:56.429Z] 24628.00 IOPS, 96.20 MiB/s 00:25:36.288 Latency(us) 00:25:36.288 [2024-12-06T02:33:56.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.288 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:36.288 nvme0n1 : 2.01 24631.14 96.22 0.00 0.00 5191.78 2621.44 11967.44 00:25:36.288 [2024-12-06T02:33:56.429Z] =================================================================================================================== 00:25:36.288 [2024-12-06T02:33:56.429Z] Total : 24631.14 96.22 0.00 0.00 5191.78 2621.44 11967.44 00:25:36.288 { 00:25:36.288 "results": [ 00:25:36.288 { 00:25:36.288 "job": "nvme0n1", 00:25:36.288 "core_mask": "0x2", 00:25:36.288 "workload": "randread", 00:25:36.288 "status": "finished", 00:25:36.288 "queue_depth": 128, 00:25:36.288 "io_size": 4096, 00:25:36.288 "runtime": 2.006728, 00:25:36.288 "iops": 24631.140842206816, 00:25:36.288 "mibps": 96.21539391487038, 00:25:36.288 "io_failed": 0, 00:25:36.288 "io_timeout": 0, 00:25:36.288 "avg_latency_us": 5191.776667335184, 00:25:36.288 "min_latency_us": 2621.44, 00:25:36.288 "max_latency_us": 11967.44347826087 00:25:36.288 } 00:25:36.288 ], 00:25:36.288 "core_count": 1 00:25:36.288 } 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:36.288 | select(.opcode=="crc32c") 00:25:36.288 | "\(.module_name) \(.executed)"' 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2757412 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2757412 ']' 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2757412 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:36.288 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2757412 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2757412' 00:25:36.547 killing process with pid 2757412 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2757412 00:25:36.547 Received shutdown signal, test time was about 2.000000 seconds 00:25:36.547 00:25:36.547 Latency(us) 00:25:36.547 [2024-12-06T02:33:56.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.547 [2024-12-06T02:33:56.688Z] =================================================================================================================== 00:25:36.547 [2024-12-06T02:33:56.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2757412 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2757884 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2757884 
/var/tmp/bperf.sock 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2757884 ']' 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:36.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:36.547 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:36.547 [2024-12-06 03:33:56.653936] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:25:36.547 [2024-12-06 03:33:56.654005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757884 ] 00:25:36.547 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:36.547 Zero copy mechanism will not be used. 
00:25:36.806 [2024-12-06 03:33:56.717199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.806 [2024-12-06 03:33:56.755847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.806 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:36.806 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:36.806 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:36.806 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:36.806 03:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:37.065 03:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:37.065 03:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:37.324 nvme0n1 00:25:37.324 03:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:37.324 03:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:37.583 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:37.584 Zero copy mechanism will not be used. 00:25:37.584 Running I/O for 2 seconds... 
00:25:39.458 5444.00 IOPS, 680.50 MiB/s [2024-12-06T02:33:59.599Z] 5450.00 IOPS, 681.25 MiB/s 00:25:39.458 Latency(us) 00:25:39.458 [2024-12-06T02:33:59.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.458 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:39.458 nvme0n1 : 2.00 5448.67 681.08 0.00 0.00 2933.94 648.24 9630.94 00:25:39.458 [2024-12-06T02:33:59.599Z] =================================================================================================================== 00:25:39.458 [2024-12-06T02:33:59.599Z] Total : 5448.67 681.08 0.00 0.00 2933.94 648.24 9630.94 00:25:39.458 { 00:25:39.458 "results": [ 00:25:39.458 { 00:25:39.458 "job": "nvme0n1", 00:25:39.458 "core_mask": "0x2", 00:25:39.458 "workload": "randread", 00:25:39.458 "status": "finished", 00:25:39.458 "queue_depth": 16, 00:25:39.458 "io_size": 131072, 00:25:39.458 "runtime": 2.003424, 00:25:39.458 "iops": 5448.671873752136, 00:25:39.458 "mibps": 681.083984219017, 00:25:39.458 "io_failed": 0, 00:25:39.458 "io_timeout": 0, 00:25:39.458 "avg_latency_us": 2933.938070642216, 00:25:39.458 "min_latency_us": 648.2365217391305, 00:25:39.458 "max_latency_us": 9630.942608695652 00:25:39.458 } 00:25:39.458 ], 00:25:39.458 "core_count": 1 00:25:39.458 } 00:25:39.458 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:39.458 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:39.458 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:39.458 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:39.458 | select(.opcode=="crc32c") 00:25:39.458 | "\(.module_name) \(.executed)"' 00:25:39.458 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2757884 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2757884 ']' 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2757884 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2757884 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2757884' 00:25:39.718 killing process with pid 2757884 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2757884 00:25:39.718 Received shutdown signal, test time was about 2.000000 seconds 
00:25:39.718 00:25:39.718 Latency(us) 00:25:39.718 [2024-12-06T02:33:59.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.718 [2024-12-06T02:33:59.859Z] =================================================================================================================== 00:25:39.718 [2024-12-06T02:33:59.859Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.718 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2757884 00:25:39.978 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:39.978 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:39.978 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:39.978 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2758395 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2758395 /var/tmp/bperf.sock 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2758395 ']' 00:25:39.979 03:33:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:39.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.979 03:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:39.979 [2024-12-06 03:33:59.998546] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:25:39.979 [2024-12-06 03:33:59.998597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2758395 ] 00:25:39.979 [2024-12-06 03:34:00.061998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.979 [2024-12-06 03:34:00.109477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.238 03:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.238 03:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:40.238 03:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:40.238 03:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:40.238 03:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:40.498 03:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:40.498 03:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:40.757 nvme0n1 00:25:40.757 03:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:40.757 03:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:40.757 Running I/O for 2 seconds... 
00:25:43.072 27950.00 IOPS, 109.18 MiB/s [2024-12-06T02:34:03.213Z] 27798.50 IOPS, 108.59 MiB/s 00:25:43.072 Latency(us) 00:25:43.072 [2024-12-06T02:34:03.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.072 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:43.072 nvme0n1 : 2.00 27813.28 108.65 0.00 0.00 4597.86 2450.48 8719.14 00:25:43.072 [2024-12-06T02:34:03.213Z] =================================================================================================================== 00:25:43.072 [2024-12-06T02:34:03.213Z] Total : 27813.28 108.65 0.00 0.00 4597.86 2450.48 8719.14 00:25:43.072 { 00:25:43.072 "results": [ 00:25:43.072 { 00:25:43.072 "job": "nvme0n1", 00:25:43.072 "core_mask": "0x2", 00:25:43.072 "workload": "randwrite", 00:25:43.072 "status": "finished", 00:25:43.072 "queue_depth": 128, 00:25:43.072 "io_size": 4096, 00:25:43.072 "runtime": 2.003539, 00:25:43.072 "iops": 27813.284393266116, 00:25:43.072 "mibps": 108.64564216119577, 00:25:43.072 "io_failed": 0, 00:25:43.072 "io_timeout": 0, 00:25:43.072 "avg_latency_us": 4597.85604231962, 00:25:43.072 "min_latency_us": 2450.4765217391305, 00:25:43.072 "max_latency_us": 8719.137391304348 00:25:43.072 } 00:25:43.072 ], 00:25:43.072 "core_count": 1 00:25:43.072 } 00:25:43.072 03:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:43.072 03:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:43.072 03:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:43.072 03:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:43.072 | select(.opcode=="crc32c") 00:25:43.072 | "\(.module_name) \(.executed)"' 00:25:43.072 03:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2758395 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2758395 ']' 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2758395 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2758395 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:43.072 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:43.073 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2758395' 00:25:43.073 killing process with pid 2758395 00:25:43.073 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2758395 00:25:43.073 Received shutdown signal, test time was about 2.000000 seconds 
00:25:43.073 00:25:43.073 Latency(us) 00:25:43.073 [2024-12-06T02:34:03.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.073 [2024-12-06T02:34:03.214Z] =================================================================================================================== 00:25:43.073 [2024-12-06T02:34:03.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:43.073 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2758395 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2759046 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2759046 /var/tmp/bperf.sock 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2759046 ']' 00:25:43.332 03:34:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:43.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:43.332 [2024-12-06 03:34:03.297452] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:25:43.332 [2024-12-06 03:34:03.297502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759046 ] 00:25:43.332 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:43.332 Zero copy mechanism will not be used. 
00:25:43.332 [2024-12-06 03:34:03.359371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.332 [2024-12-06 03:34:03.402130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:43.332 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:43.591 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.591 03:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:44.157 nvme0n1 00:25:44.157 03:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:44.157 03:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:44.157 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:44.157 Zero copy mechanism will not be used. 00:25:44.157 Running I/O for 2 seconds... 
00:25:46.026 5856.00 IOPS, 732.00 MiB/s [2024-12-06T02:34:06.167Z] 5857.50 IOPS, 732.19 MiB/s 00:25:46.026 Latency(us) 00:25:46.026 [2024-12-06T02:34:06.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.026 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:46.026 nvme0n1 : 2.00 5853.36 731.67 0.00 0.00 2728.20 2023.07 8206.25 00:25:46.026 [2024-12-06T02:34:06.167Z] =================================================================================================================== 00:25:46.026 [2024-12-06T02:34:06.167Z] Total : 5853.36 731.67 0.00 0.00 2728.20 2023.07 8206.25 00:25:46.284 { 00:25:46.284 "results": [ 00:25:46.284 { 00:25:46.284 "job": "nvme0n1", 00:25:46.284 "core_mask": "0x2", 00:25:46.284 "workload": "randwrite", 00:25:46.284 "status": "finished", 00:25:46.284 "queue_depth": 16, 00:25:46.284 "io_size": 131072, 00:25:46.284 "runtime": 2.003978, 00:25:46.284 "iops": 5853.357671591205, 00:25:46.284 "mibps": 731.6697089489006, 00:25:46.284 "io_failed": 0, 00:25:46.284 "io_timeout": 0, 00:25:46.284 "avg_latency_us": 2728.204724563549, 00:25:46.284 "min_latency_us": 2023.0678260869565, 00:25:46.284 "max_latency_us": 8206.24695652174 00:25:46.284 } 00:25:46.284 ], 00:25:46.284 "core_count": 1 00:25:46.284 } 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:46.284 | select(.opcode=="crc32c") 00:25:46.284 | "\(.module_name) \(.executed)"' 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2759046 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2759046 ']' 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2759046 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.284 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759046 00:25:46.541 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:46.541 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759046' 00:25:46.542 killing process with pid 2759046 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2759046 00:25:46.542 Received shutdown signal, test time was about 2.000000 seconds 
00:25:46.542 00:25:46.542 Latency(us) 00:25:46.542 [2024-12-06T02:34:06.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.542 [2024-12-06T02:34:06.683Z] =================================================================================================================== 00:25:46.542 [2024-12-06T02:34:06.683Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2759046 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2757255 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2757255 ']' 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2757255 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2757255 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2757255' 00:25:46.542 killing process with pid 2757255 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2757255 00:25:46.542 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2757255 00:25:46.799 00:25:46.799 
real 0m13.922s 00:25:46.799 user 0m26.715s 00:25:46.799 sys 0m4.477s 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.799 ************************************ 00:25:46.799 END TEST nvmf_digest_clean 00:25:46.799 ************************************ 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:46.799 ************************************ 00:25:46.799 START TEST nvmf_digest_error 00:25:46.799 ************************************ 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:46.799 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2759547 00:25:46.800 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2759547 00:25:46.800 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:46.800 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2759547 ']' 00:25:46.800 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.800 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.800 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.800 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.800 03:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.057 [2024-12-06 03:34:06.939920] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:25:47.057 [2024-12-06 03:34:06.939969] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.057 [2024-12-06 03:34:07.006102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.057 [2024-12-06 03:34:07.048863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.057 [2024-12-06 03:34:07.048897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:47.057 [2024-12-06 03:34:07.048906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.057 [2024-12-06 03:34:07.048912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.057 [2024-12-06 03:34:07.048917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.057 [2024-12-06 03:34:07.049479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.057 [2024-12-06 03:34:07.133963] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.057 03:34:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.057 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.315 null0 00:25:47.315 [2024-12-06 03:34:07.227769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.316 [2024-12-06 03:34:07.251971] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2759687 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2759687 /var/tmp/bperf.sock 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2759687 ']' 
00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:47.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.316 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.316 [2024-12-06 03:34:07.307021] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:25:47.316 [2024-12-06 03:34:07.307065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759687 ] 00:25:47.316 [2024-12-06 03:34:07.369619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.316 [2024-12-06 03:34:07.412642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.573 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.573 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:47.573 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:47.573 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:47.573 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:47.573 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.573 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.831 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.831 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.831 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.831 nvme0n1 00:25:48.088 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:48.088 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.088 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:48.088 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.088 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:48.088 03:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:48.088 Running I/O for 2 seconds... 00:25:48.088 [2024-12-06 03:34:08.097669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.097703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.097713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.106643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.106666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.106676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.118457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.118479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.118488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.126863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.126884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9610 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.126892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.138320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.138342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.138351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.149932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.149959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.149968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.159485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.159505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.159513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.168730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.168750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.168758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.180562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.180582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.180590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.189758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.189778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.189790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.199349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.199369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.199377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.209267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.209287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.209296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.089 [2024-12-06 03:34:08.218929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.089 [2024-12-06 03:34:08.218953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.089 [2024-12-06 03:34:08.218962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.373 [2024-12-06 03:34:08.228416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.228438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.228447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.238180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.238202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.238211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.248382] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.248403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.248411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.256535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.256555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.256563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.266964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.266983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.266991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.276344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.276364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.276372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.285942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.285968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.285976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.295745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.295766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.295773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.305705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.305724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.305733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.313928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.313952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.313961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.324594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.324614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.324622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.334071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.334091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.334099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.343584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.343604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.343612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.355744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.355764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.355776] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.364183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.364214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.364222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.375870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.375891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.375900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.386331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.386351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.386359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.394777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.394796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.394804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.405103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.405124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.405132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.415144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.415164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.415172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.424413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.424433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.424441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.432813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.432833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:6731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.432841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.443091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.443118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.443126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.453839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.453861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.453869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.461903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.461923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.461932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.472385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.472407] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.374 [2024-12-06 03:34:08.472415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.374 [2024-12-06 03:34:08.482164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.374 [2024-12-06 03:34:08.482188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.375 [2024-12-06 03:34:08.482197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.375 [2024-12-06 03:34:08.491512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.375 [2024-12-06 03:34:08.491534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.375 [2024-12-06 03:34:08.491543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.375 [2024-12-06 03:34:08.502051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.375 [2024-12-06 03:34:08.502072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.375 [2024-12-06 03:34:08.502081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.511319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.511342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.511350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.521073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.521096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.521104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.530424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.530445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.530454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.540487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.540509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.540517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.548915] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.548937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.548945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.560892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.560912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.560921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.570097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.570117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.570126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.580556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.580576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.580584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.589420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.589441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.589449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.599552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.599573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.599581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.608377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.608398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.633 [2024-12-06 03:34:08.608410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.633 [2024-12-06 03:34:08.619077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.633 [2024-12-06 03:34:08.619099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.619107] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.628137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.628158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.628167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.636814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.636835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.636843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.646828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.646849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.646857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.656013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.656035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 
03:34:08.656043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.665338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.665359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.665368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.674325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.674345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.674354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.685056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.685077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.685085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.696142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.696164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15756 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.696172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.705739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.705760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.705768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.715175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.715196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.715204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.723594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.723615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.723623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.735872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.735894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.735902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.748628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.748649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.748657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.759501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.759522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.759530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.634 [2024-12-06 03:34:08.767691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.634 [2024-12-06 03:34:08.767713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.634 [2024-12-06 03:34:08.767722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.779996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.780019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.780032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.791924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.791945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.791961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.800809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.800830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.800838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.811473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.811494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.811503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.823744] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.823766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.823775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.833221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.833240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.833248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.843816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.843837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.843845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.852845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.852867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.852875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.865479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.865500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.865508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.877236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.877260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.877268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.886535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.886556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.886564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.898047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.898068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.898077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.909203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.909225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.909233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.917764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.917784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.917793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.892 [2024-12-06 03:34:08.928695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.892 [2024-12-06 03:34:08.928715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.892 [2024-12-06 03:34:08.928723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:08.939885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:08.939906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 
03:34:08.939915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:08.949868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:08.949889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 03:34:08.949897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:08.958588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:08.958609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 03:34:08.958618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:08.968070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:08.968091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 03:34:08.968099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:08.978791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:08.978812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22302 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 03:34:08.978820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:08.988179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:08.988199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 03:34:08.988207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:08.997149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:08.997169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 03:34:08.997177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:09.006536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:09.006557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 03:34:09.006565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:09.016813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:09.016834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 03:34:09.016842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.893 [2024-12-06 03:34:09.025812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:48.893 [2024-12-06 03:34:09.025833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.893 [2024-12-06 03:34:09.025841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.152 [2024-12-06 03:34:09.035737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.152 [2024-12-06 03:34:09.035759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.152 [2024-12-06 03:34:09.035767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.152 [2024-12-06 03:34:09.044952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.152 [2024-12-06 03:34:09.044974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.152 [2024-12-06 03:34:09.044985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.152 [2024-12-06 03:34:09.054829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x221c2e0) 00:25:49.152 [2024-12-06 03:34:09.054849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.152 [2024-12-06 03:34:09.054858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.152 [2024-12-06 03:34:09.064653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.152 [2024-12-06 03:34:09.064673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.152 [2024-12-06 03:34:09.064681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.073956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.073977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.073986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 25323.00 IOPS, 98.92 MiB/s [2024-12-06T02:34:09.294Z] [2024-12-06 03:34:09.085810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.085830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.085839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 
[2024-12-06 03:34:09.094434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.094454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.094462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.105294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.105315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.105323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.116256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.116277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.116285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.124401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.124421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.124429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.135132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.135152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.135161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.146194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.146214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.146222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.158027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.158049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.158058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.166133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.166154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.166162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.176860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.176880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.176888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.189358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.189378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.189386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.197567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.197587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.197595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.209278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.209298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:49.153 [2024-12-06 03:34:09.209306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.218963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.218982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.218993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.227686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.227706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.227714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.236774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.236794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.236802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.246258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.246278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:3006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.246286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.256484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.256504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.256512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.265792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.265812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.265820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.276344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.276364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.276372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.153 [2024-12-06 03:34:09.288819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.153 [2024-12-06 03:34:09.288841] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.153 [2024-12-06 03:34:09.288850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.297171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.297192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.297201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.308942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.308973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.308982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.318750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.318770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.318778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.328656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.328676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.328684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.338279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.338299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.338308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.349286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.349307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.349315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.357570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.357590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.357598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.366814] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.366835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.366842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.377040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.377060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.377068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.386615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.386636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.386644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.396282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.396302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.396310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.406144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.406164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.406173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.416251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.416272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.416280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.424360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.424381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.424389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.436428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.436450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.436458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.448528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.448549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.413 [2024-12-06 03:34:09.448557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.413 [2024-12-06 03:34:09.457345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.413 [2024-12-06 03:34:09.457365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 03:34:09.457374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.414 [2024-12-06 03:34:09.467249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.414 [2024-12-06 03:34:09.467269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 03:34:09.467277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.414 [2024-12-06 03:34:09.475728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.414 [2024-12-06 03:34:09.475748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 
03:34:09.475760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.414 [2024-12-06 03:34:09.486553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.414 [2024-12-06 03:34:09.486574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 03:34:09.486582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.414 [2024-12-06 03:34:09.496197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.414 [2024-12-06 03:34:09.496217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 03:34:09.496225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.414 [2024-12-06 03:34:09.504470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.414 [2024-12-06 03:34:09.504491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 03:34:09.504498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.414 [2024-12-06 03:34:09.514799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.414 [2024-12-06 03:34:09.514820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19782 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 03:34:09.514828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.414 [2024-12-06 03:34:09.524152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.414 [2024-12-06 03:34:09.524173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 03:34:09.524181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.414 [2024-12-06 03:34:09.536860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.414 [2024-12-06 03:34:09.536881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 03:34:09.536889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.414 [2024-12-06 03:34:09.545513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.414 [2024-12-06 03:34:09.545533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.414 [2024-12-06 03:34:09.545541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.673 [2024-12-06 03:34:09.557591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.673 [2024-12-06 03:34:09.557613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.673 [2024-12-06 03:34:09.557622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.673 [2024-12-06 03:34:09.570495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.673 [2024-12-06 03:34:09.570515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.673 [2024-12-06 03:34:09.570523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.673 [2024-12-06 03:34:09.581415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.673 [2024-12-06 03:34:09.581435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.673 [2024-12-06 03:34:09.581444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.673 [2024-12-06 03:34:09.590367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.673 [2024-12-06 03:34:09.590395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.673 [2024-12-06 03:34:09.590403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.673 [2024-12-06 03:34:09.603482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x221c2e0) 00:25:49.673 [2024-12-06 03:34:09.603502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.673 [2024-12-06 03:34:09.603511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.673 [2024-12-06 03:34:09.616348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.673 [2024-12-06 03:34:09.616368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.673 [2024-12-06 03:34:09.616376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.673 [2024-12-06 03:34:09.629302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.673 [2024-12-06 03:34:09.629322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.673 [2024-12-06 03:34:09.629330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.673 [2024-12-06 03:34:09.640956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.673 [2024-12-06 03:34:09.640976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.673 [2024-12-06 03:34:09.640984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.673 [2024-12-06 03:34:09.653232] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.673 [2024-12-06 03:34:09.653252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.653260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.664595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.664615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.664626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.673432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.673452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.673460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.684923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.684943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.684957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.697602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.697622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.697630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.711515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.711535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.711543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.722819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.722839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.722847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.731638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.731659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.731667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.742864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.742885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.742893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.752319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.752339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.752347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.761823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.761850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.761859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.773066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.773086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 
03:34:09.773095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.784628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.784648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.784657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.794104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.794124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.794132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.674 [2024-12-06 03:34:09.806103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.674 [2024-12-06 03:34:09.806127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.674 [2024-12-06 03:34:09.806137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.818161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.818183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21915 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.818192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.827084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.827105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.827114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.837934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.837959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.837967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.847476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.847495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.847503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.856883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.856904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.856912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.866658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.866679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.866687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.875940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.875966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.875974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.885713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.885734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.885742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.894107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.894127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.894135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.905271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.905291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.905299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.915616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.915637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.915647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.925754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.925776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.934 [2024-12-06 03:34:09.925784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.934 [2024-12-06 03:34:09.934287] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.934 [2024-12-06 03:34:09.934308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:09.934320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:09.944953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:09.944974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:09.944983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:09.956398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:09.956419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:09.956427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:09.965299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:09.965320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:09.965328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:09.975515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:09.975535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:09.975544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:09.983812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:09.983832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:09.983841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:09.994391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:09.994412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:09.994420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:10.003759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:10.003781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:10.003790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:10.016181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:10.016206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:10.016215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:10.027867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:10.027889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:10.027897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:10.037795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:10.037818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:10.037826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:10.047504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:10.047527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 
03:34:10.047536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:10.058757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:10.058779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:10.058788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.935 [2024-12-06 03:34:10.069852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:49.935 [2024-12-06 03:34:10.069874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.935 [2024-12-06 03:34:10.069883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.194 24965.00 IOPS, 97.52 MiB/s [2024-12-06T02:34:10.335Z] [2024-12-06 03:34:10.082126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221c2e0) 00:25:50.194 [2024-12-06 03:34:10.082146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.194 [2024-12-06 03:34:10.082155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.194 00:25:50.194 Latency(us) 00:25:50.194 [2024-12-06T02:34:10.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.194 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 
00:25:50.194 nvme0n1 : 2.04 24489.05 95.66 0.00 0.00 5118.87 2279.51 48781.58 00:25:50.194 [2024-12-06T02:34:10.335Z] =================================================================================================================== 00:25:50.194 [2024-12-06T02:34:10.335Z] Total : 24489.05 95.66 0.00 0.00 5118.87 2279.51 48781.58 00:25:50.194 { 00:25:50.194 "results": [ 00:25:50.194 { 00:25:50.194 "job": "nvme0n1", 00:25:50.194 "core_mask": "0x2", 00:25:50.194 "workload": "randread", 00:25:50.194 "status": "finished", 00:25:50.194 "queue_depth": 128, 00:25:50.194 "io_size": 4096, 00:25:50.194 "runtime": 2.044097, 00:25:50.194 "iops": 24489.053112450143, 00:25:50.194 "mibps": 95.66036372050837, 00:25:50.194 "io_failed": 0, 00:25:50.194 "io_timeout": 0, 00:25:50.194 "avg_latency_us": 5118.871738835126, 00:25:50.194 "min_latency_us": 2279.513043478261, 00:25:50.194 "max_latency_us": 48781.57913043478 00:25:50.194 } 00:25:50.194 ], 00:25:50.194 "core_count": 1 00:25:50.194 } 00:25:50.194 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:50.194 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:50.194 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:50.194 | .driver_specific 00:25:50.194 | .nvme_error 00:25:50.194 | .status_code 00:25:50.194 | .command_transient_transport_error' 00:25:50.194 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 )) 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2759687 00:25:50.453 03:34:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2759687 ']' 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2759687 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759687 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759687' 00:25:50.453 killing process with pid 2759687 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2759687 00:25:50.453 Received shutdown signal, test time was about 2.000000 seconds 00:25:50.453 00:25:50.453 Latency(us) 00:25:50.453 [2024-12-06T02:34:10.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.453 [2024-12-06T02:34:10.594Z] =================================================================================================================== 00:25:50.453 [2024-12-06T02:34:10.594Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2759687 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2760259 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2760259 /var/tmp/bperf.sock 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2760259 ']' 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:50.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.453 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.712 [2024-12-06 03:34:10.604375] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:25:50.712 [2024-12-06 03:34:10.604426] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760259 ] 00:25:50.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:50.712 Zero copy mechanism will not be used. 00:25:50.712 [2024-12-06 03:34:10.667654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.712 [2024-12-06 03:34:10.709048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.712 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.712 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:50.712 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:50.712 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:50.970 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:50.970 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.970 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.970 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.970 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:50.970 03:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.229 nvme0n1 00:25:51.488 03:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:51.488 03:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.488 03:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.488 03:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.488 03:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:51.488 03:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:51.488 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:51.488 Zero copy mechanism will not be used. 00:25:51.488 Running I/O for 2 seconds... 
00:25:51.488 [2024-12-06 03:34:11.481644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.488 [2024-12-06 03:34:11.481683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.488 [2024-12-06 03:34:11.481694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.488 [2024-12-06 03:34:11.488567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.488 [2024-12-06 03:34:11.488596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.488 [2024-12-06 03:34:11.488610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.488 [2024-12-06 03:34:11.496086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.488 [2024-12-06 03:34:11.496111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.488 [2024-12-06 03:34:11.496120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.488 [2024-12-06 03:34:11.504795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.488 [2024-12-06 03:34:11.504819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.488 [2024-12-06 03:34:11.504828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.488 [2024-12-06 03:34:11.512449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.488 [2024-12-06 03:34:11.512473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.488 [2024-12-06 03:34:11.512482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.488 [2024-12-06 03:34:11.519990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.488 [2024-12-06 03:34:11.520014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.488 [2024-12-06 03:34:11.520023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.488 [2024-12-06 03:34:11.526564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.488 [2024-12-06 03:34:11.526587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.488 [2024-12-06 03:34:11.526595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.488 [2024-12-06 03:34:11.532765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.488 [2024-12-06 03:34:11.532788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.488 [2024-12-06 03:34:11.532797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.488 [2024-12-06 03:34:11.539149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.539171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.539180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.545169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.545192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.545201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.551527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.551553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.551561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.557437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.557461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:51.489 [2024-12-06 03:34:11.557470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.563679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.563701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.563709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.569829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.569850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.569858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.575811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.575835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.575843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.581878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.581900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.581908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.588173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.588195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.588204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.594152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.594174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.594182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.597642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.597663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.597671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.603667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.603689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.603697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.609354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.609375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.609383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.615309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.615331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.615340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.489 [2024-12-06 03:34:11.621242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.489 [2024-12-06 03:34:11.621264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.489 [2024-12-06 03:34:11.621272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.627312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 
00:25:51.750 [2024-12-06 03:34:11.627337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.627346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.633363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.633387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.633396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.639306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.639327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.639335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.645042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.645063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.645070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.650793] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.650814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.650825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.656701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.656723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.656731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.662457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.662480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.662488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.668281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.668303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.668311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.673419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.673442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.673450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.679043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.679066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.679074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.684653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.684676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.684684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.690316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.690339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.690348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.696061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.696084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.696093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.701822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.701845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.701854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.707656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.707679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.707687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.714021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.714043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.714051] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.719796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.719819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.719828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.725873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.725895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.725903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.731715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.731738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.731746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.737372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.737393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.737401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.743230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.743253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.750 [2024-12-06 03:34:11.743261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.750 [2024-12-06 03:34:11.748632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.750 [2024-12-06 03:34:11.748654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.748668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.754797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.754820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.754829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.761283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.761304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.761312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.767515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.767537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.767545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.773541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.773562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.773571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.779814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.779836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.779845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.786577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.786599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.786607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.793023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.793046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.793054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.799390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.799411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.799419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.805622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.805648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.805657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.811482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 
00:25:51.751 [2024-12-06 03:34:11.811503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.811511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.817277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.817299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.817307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.823011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.823032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.823040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.829172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.829194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.829202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.835182] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.835204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.835212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.840445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.840465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.840473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.843814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.843834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.843842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.850047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.850068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.850076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.855805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.855827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.855835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.861657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.861677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.861685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.867405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.867426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.867434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.874641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.874661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.874669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.751 [2024-12-06 03:34:11.881065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:51.751 [2024-12-06 03:34:11.881085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.751 [2024-12-06 03:34:11.881093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.012 [2024-12-06 03:34:11.887722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.012 [2024-12-06 03:34:11.887746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.012 [2024-12-06 03:34:11.887755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.012 [2024-12-06 03:34:11.894083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.012 [2024-12-06 03:34:11.894104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.012 [2024-12-06 03:34:11.894112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.012 [2024-12-06 03:34:11.900557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.012 [2024-12-06 03:34:11.900579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.012 [2024-12-06 
03:34:11.900587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.012 [2024-12-06 03:34:11.906741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.012 [2024-12-06 03:34:11.906762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.012 [2024-12-06 03:34:11.906774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.012 [2024-12-06 03:34:11.912780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.012 [2024-12-06 03:34:11.912801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.012 [2024-12-06 03:34:11.912809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.012 [2024-12-06 03:34:11.918757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.012 [2024-12-06 03:34:11.918777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.012 [2024-12-06 03:34:11.918785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.012 [2024-12-06 03:34:11.924668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.012 [2024-12-06 03:34:11.924689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.012 [2024-12-06 03:34:11.924696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.012 [2024-12-06 03:34:11.930547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.012 [2024-12-06 03:34:11.930568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.930576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.936478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.936499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.936508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.942484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.942505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.942513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.948835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.948856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.948863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.955373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.955394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.955402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.961582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.961607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.961616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.967734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.967754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.967762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.973818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.973837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.973845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.979694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.979714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.979722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.985337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.985358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.985366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.991320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.991342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.991350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:11.997114] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:11.997134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:11.997143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.003616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.003637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.003644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.009793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.009814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.009822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.016223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.016243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.016251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.022182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.022204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.022212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.028040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.028061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.028069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.033992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.034014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.034022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.040184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.040205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.040214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.045868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.045890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.045898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.051641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.051663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.051670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.057681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.057704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.057712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.063726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.063748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.063760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.070498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.070520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.070527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.076842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.076862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.076870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.083179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.083200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.083208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.013 [2024-12-06 03:34:12.089353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.013 [2024-12-06 03:34:12.089375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:52.013 [2024-12-06 03:34:12.089383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.014 [2024-12-06 03:34:12.095376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.014 [2024-12-06 03:34:12.095397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-06 03:34:12.095405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.014 [2024-12-06 03:34:12.101496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.014 [2024-12-06 03:34:12.101518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-06 03:34:12.101527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.014 [2024-12-06 03:34:12.107721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.014 [2024-12-06 03:34:12.107742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-06 03:34:12.107750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.014 [2024-12-06 03:34:12.113868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.014 [2024-12-06 03:34:12.113888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-06 03:34:12.113895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.014 [2024-12-06 03:34:12.120221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.014 [2024-12-06 03:34:12.120242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-06 03:34:12.120250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.014 [2024-12-06 03:34:12.126354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.014 [2024-12-06 03:34:12.126375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-06 03:34:12.126382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.014 [2024-12-06 03:34:12.132268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.014 [2024-12-06 03:34:12.132289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-06 03:34:12.132296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.014 [2024-12-06 03:34:12.138441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.014 [2024-12-06 03:34:12.138462] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-06 03:34:12.138469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.014 [2024-12-06 03:34:12.144530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.014 [2024-12-06 03:34:12.144551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.014 [2024-12-06 03:34:12.144559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.151146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.151168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.151176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.158369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.158393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.158401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.166852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.166875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.166884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.174023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.174044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.174055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.180166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.180188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.180196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.186206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.186227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.186235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.192381] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.192401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.192409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.198515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.198537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.198545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.204722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.204743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.204751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.210858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.210880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.210888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.217676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.217698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.217707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.224319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.224341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.224349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.231098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.231123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.231131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.237393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.237416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.237424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.244508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.244532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.244540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.251463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.251485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.251493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.258962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.258984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.258993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.266331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.266353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 
03:34:12.266362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.272769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.272791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.272799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.279083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.279104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.279112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.285428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.285449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.285457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.292094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.292115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.292123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.300341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.300364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.300372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.307583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.275 [2024-12-06 03:34:12.307605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.275 [2024-12-06 03:34:12.307613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.275 [2024-12-06 03:34:12.315340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.315362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.315371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.323411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.323433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.323441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.331582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.331605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.331613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.339226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.339249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.339257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.348227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.348249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.348258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.355125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 
03:34:12.355147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.355160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.361519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.361541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.361550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.367787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.367808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.367816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.373985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.374006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.374014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.379276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.379297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.379305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.385118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.385139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.385147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.390866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.390888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.390896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.396791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.396812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.396820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.402572] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.402594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.402602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.276 [2024-12-06 03:34:12.408670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.276 [2024-12-06 03:34:12.408697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.276 [2024-12-06 03:34:12.408706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.415084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.415107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.415116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.420894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.420917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.420925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.426918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.426939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.426952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.432918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.432940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.432955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.438878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.438900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.438909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.445060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.445082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.445090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.451387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.451409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.451417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.457708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.457730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.457742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.464117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.464138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.464147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.536 [2024-12-06 03:34:12.470108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.536 [2024-12-06 03:34:12.470129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.536 [2024-12-06 03:34:12.470138] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.476349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.476370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.476379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.537 4944.00 IOPS, 618.00 MiB/s [2024-12-06T02:34:12.678Z] [2024-12-06 03:34:12.483869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.483891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.483899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.491019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.491042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.491050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.497476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.497498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.497508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.503880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.503902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.503910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.510185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.510208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.510216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.516285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.516313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.516321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.522556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.522578] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.522586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.528851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.528873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.528881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.534863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.534885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.534893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.540790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.540811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.540818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.546608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.546629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.546637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.552370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.552392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.552400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.558101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.558122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.558130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.564062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.564084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.564092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.569800] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.569821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.569829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.575538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.575559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.575567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.581602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.581624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.581632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.587309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.587332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.587340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.592997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.593019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.593027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.598796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.598818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.598826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.604423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.604445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.604453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.610034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.610056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.610065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.615791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.615814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.615825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.621459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.621481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.621489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.627077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.627100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.537 [2024-12-06 03:34:12.627108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.537 [2024-12-06 03:34:12.632725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.537 [2024-12-06 03:34:12.632747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.538 [2024-12-06 03:34:12.632755] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.538 [2024-12-06 03:34:12.638229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.538 [2024-12-06 03:34:12.638251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.538 [2024-12-06 03:34:12.638259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.538 [2024-12-06 03:34:12.643695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.538 [2024-12-06 03:34:12.643717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.538 [2024-12-06 03:34:12.643725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.538 [2024-12-06 03:34:12.649261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.538 [2024-12-06 03:34:12.649282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.538 [2024-12-06 03:34:12.649290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.538 [2024-12-06 03:34:12.654641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.538 [2024-12-06 03:34:12.654663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:52.538 [2024-12-06 03:34:12.654671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.538 [2024-12-06 03:34:12.659985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.538 [2024-12-06 03:34:12.660006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.538 [2024-12-06 03:34:12.660015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.538 [2024-12-06 03:34:12.665289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.538 [2024-12-06 03:34:12.665314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.538 [2024-12-06 03:34:12.665322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.538 [2024-12-06 03:34:12.670541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.538 [2024-12-06 03:34:12.670563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.538 [2024-12-06 03:34:12.670572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.676003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.676026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.676034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.681474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.681496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.681504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.687067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.687088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.687096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.692506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.692526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.692534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.697912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.697934] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.697942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.703338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.703359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.703367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.708840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.708862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.708870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.714625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.714647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.714655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.720465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.720487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.720495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.725986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.726007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.726015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.731515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.731535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.731543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.737079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.737101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.737109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.742656] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.742677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.742685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.748273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.748296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.748304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.753758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.753780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.753789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.759251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.759277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.759286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.764791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.764812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.764820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.770327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.770349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.770357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.775904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.775925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.775933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.781511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.781535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.781543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.787074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.787095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.787103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.792628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.792651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.792659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.798431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.798453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.798461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.804086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.804108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.798 [2024-12-06 03:34:12.804116] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.798 [2024-12-06 03:34:12.809747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.798 [2024-12-06 03:34:12.809769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.809777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.815341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.815363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.815372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.820985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.821006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.821014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.826501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.826523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.826532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.832095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.832118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.832126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.837468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.837490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.837499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.842852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.842874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.842882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.848513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.848534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.848542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.854135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.854158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.854170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.859718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.859740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.859748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.865226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.865247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.865256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.870798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.870820] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.870828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.876513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.876535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.876543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.882295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.882318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.882326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.888114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.888136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.888144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.893917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.893938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.893954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.899726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.899749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.899757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.905545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.905571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.905579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.911379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.911401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.911409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.917227] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.917248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.917255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.923021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.923043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.923050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.928529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.928551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.928559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.799 [2024-12-06 03:34:12.934091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:52.799 [2024-12-06 03:34:12.934114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.799 [2024-12-06 03:34:12.934124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.939726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.939749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.939757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.945386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.945406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.945415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.951141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.951163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.951171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.956789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.956810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.956818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.962368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.962390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.962398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.968035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.968058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.968066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.973847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.973869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.973877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.979350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.979371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.979379] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.984904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.984926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.984934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.990513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.990535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.990543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:12.996070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:12.996092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:12.996101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.001712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:13.001733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:13.001745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.007227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:13.007249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:13.007258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.012583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:13.012605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:13.012613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.018044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:13.018066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:13.018074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.023594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:13.023616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:13.023624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.029350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:13.029372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:13.029380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.035129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:13.035151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:13.035158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.040715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:13.040737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:13.040745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.046345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.060 [2024-12-06 03:34:13.046367] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.060 [2024-12-06 03:34:13.046375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.060 [2024-12-06 03:34:13.052148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.052170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.052178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.057807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.057828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.057836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.063382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.063405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.063413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.068959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.068981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.068990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.074435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.074457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.074465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.080148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.080171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.080180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.085944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.085972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.085980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.091531] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.091554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.091562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.097187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.097209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.097221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.102931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.102959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.102967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.108612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.108634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.108643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.114135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.114157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.114166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.119801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.119823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.119831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.125572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.125594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.125602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.131347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.131369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.131377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.137137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.137158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.137166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.142916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.142938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.142953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.148608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.148634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.148642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.154153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.154175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 
03:34:13.154183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.159733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.159755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.159763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.165313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.165334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.165342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.170860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.170880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.170888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.061 [2024-12-06 03:34:13.176402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.061 [2024-12-06 03:34:13.176424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.061 [2024-12-06 03:34:13.176432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.062 [2024-12-06 03:34:13.181882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.062 [2024-12-06 03:34:13.181905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.062 [2024-12-06 03:34:13.181913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.062 [2024-12-06 03:34:13.187361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.062 [2024-12-06 03:34:13.187382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.062 [2024-12-06 03:34:13.187390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.062 [2024-12-06 03:34:13.193629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.062 [2024-12-06 03:34:13.193653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.062 [2024-12-06 03:34:13.193661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.201009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.201032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.201041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.208280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.208303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.208311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.212588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.212609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.212617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.217784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.217806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.217814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.223940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 
00:25:53.322 [2024-12-06 03:34:13.223968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.223976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.230146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.230176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.230185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.236323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.236345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.236353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.241816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.241838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.241846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.247294] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.247314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.247327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.252678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.252700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.252708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.258059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.258081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.258090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.263354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.263375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.263383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.268754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.268776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.268784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.274012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.274033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.322 [2024-12-06 03:34:13.274041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.322 [2024-12-06 03:34:13.279584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.322 [2024-12-06 03:34:13.279605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.279614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.285131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.285152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.285160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.290713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.290735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.290743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.294278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.294303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.294311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.298833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.298854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.298862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.304582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.304602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 
03:34:13.304610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.310664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.310685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.310693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.316290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.316311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.316318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.321845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.321867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.321875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.327409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.327431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.327439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.333151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.333172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.333181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.338883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.338904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.338912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.344442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.344464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.344471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.349906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.349927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.349936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.355441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.355463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.355470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.360992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.361012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.361020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.366581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.366602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.366610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.372060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.372081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.372089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.377419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.377441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.377449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.382731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.382753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.382760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.388110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.388131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.388142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.393596] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.393617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.393625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.398817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.398838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.398846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.403860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.403882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.403890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.409132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.409153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.409161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.414081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.414103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.414111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.419314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.419336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.323 [2024-12-06 03:34:13.419344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.323 [2024-12-06 03:34:13.424435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.323 [2024-12-06 03:34:13.424456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.324 [2024-12-06 03:34:13.424464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.324 [2024-12-06 03:34:13.429531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.324 [2024-12-06 03:34:13.429552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.324 [2024-12-06 03:34:13.429560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.324 [2024-12-06 03:34:13.434702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.324 [2024-12-06 03:34:13.434723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.324 [2024-12-06 03:34:13.434731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.324 [2024-12-06 03:34:13.439944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.324 [2024-12-06 03:34:13.439970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.324 [2024-12-06 03:34:13.439978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.324 [2024-12-06 03:34:13.445234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.324 [2024-12-06 03:34:13.445256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.324 [2024-12-06 03:34:13.445264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.324 [2024-12-06 03:34:13.450670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.324 [2024-12-06 03:34:13.450691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.324 [2024-12-06 
03:34:13.450699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.324 [2024-12-06 03:34:13.456362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.324 [2024-12-06 03:34:13.456388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.324 [2024-12-06 03:34:13.456401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.583 [2024-12-06 03:34:13.462048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.583 [2024-12-06 03:34:13.462071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.583 [2024-12-06 03:34:13.462080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.583 [2024-12-06 03:34:13.467559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.583 [2024-12-06 03:34:13.467582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.583 [2024-12-06 03:34:13.467590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.583 [2024-12-06 03:34:13.473084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.583 [2024-12-06 03:34:13.473106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.583 [2024-12-06 03:34:13.473114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.583 [2024-12-06 03:34:13.478582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160edd0) 00:25:53.583 [2024-12-06 03:34:13.478603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.583 [2024-12-06 03:34:13.478614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.583 5231.00 IOPS, 653.88 MiB/s 00:25:53.583 Latency(us) 00:25:53.583 [2024-12-06T02:34:13.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.583 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:53.583 nvme0n1 : 2.00 5234.91 654.36 0.00 0.00 3053.49 655.36 8947.09 00:25:53.583 [2024-12-06T02:34:13.724Z] =================================================================================================================== 00:25:53.583 [2024-12-06T02:34:13.724Z] Total : 5234.91 654.36 0.00 0.00 3053.49 655.36 8947.09 00:25:53.583 { 00:25:53.583 "results": [ 00:25:53.583 { 00:25:53.583 "job": "nvme0n1", 00:25:53.583 "core_mask": "0x2", 00:25:53.583 "workload": "randread", 00:25:53.583 "status": "finished", 00:25:53.583 "queue_depth": 16, 00:25:53.583 "io_size": 131072, 00:25:53.583 "runtime": 2.001564, 00:25:53.583 "iops": 5234.906303270842, 00:25:53.583 "mibps": 654.3632879088552, 00:25:53.583 "io_failed": 0, 00:25:53.583 "io_timeout": 0, 00:25:53.583 "avg_latency_us": 3053.4920776450867, 00:25:53.583 "min_latency_us": 655.36, 00:25:53.583 "max_latency_us": 8947.088695652174 00:25:53.583 } 00:25:53.583 ], 00:25:53.583 "core_count": 1 00:25:53.583 } 00:25:53.583 03:34:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:53.583 | .driver_specific 00:25:53.583 | .nvme_error 00:25:53.583 | .status_code 00:25:53.583 | .command_transient_transport_error' 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 338 > 0 )) 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2760259 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2760259 ']' 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2760259 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.583 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2760259 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760259' 
00:25:53.843 killing process with pid 2760259 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2760259 00:25:53.843 Received shutdown signal, test time was about 2.000000 seconds 00:25:53.843 00:25:53.843 Latency(us) 00:25:53.843 [2024-12-06T02:34:13.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.843 [2024-12-06T02:34:13.984Z] =================================================================================================================== 00:25:53.843 [2024-12-06T02:34:13.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2760259 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2760732 00:25:53.843 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2760732 /var/tmp/bperf.sock 00:25:53.844 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:53.844 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2760732 ']' 00:25:53.844 03:34:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:53.844 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.844 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:53.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:53.844 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.844 03:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:53.844 [2024-12-06 03:34:13.965011] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:25:53.844 [2024-12-06 03:34:13.965060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760732 ] 00:25:54.104 [2024-12-06 03:34:14.027722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.104 [2024-12-06 03:34:14.070428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.104 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.104 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:54.104 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:54.104 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:54.363 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:54.363 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.363 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.363 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.363 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:54.363 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:54.622 nvme0n1 00:25:54.623 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:54.623 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.623 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.623 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.623 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:54.623 03:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:54.883 
Running I/O for 2 seconds... 00:25:54.883 [2024-12-06 03:34:14.784897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.785080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.785108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.794680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.794840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.794862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.804375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.804534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.804553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.814050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.814207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.814226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.823736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.823894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.823912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.833353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.833513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.833532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.842977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.843135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.843154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.852573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.852729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.852751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.862234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.862393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.862412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.871881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.883 [2024-12-06 03:34:14.872046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-12-06 03:34:14.872065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.883 [2024-12-06 03:34:14.881509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.881665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.881684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.891114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.891269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:54.884 [2024-12-06 03:34:14.891287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.900726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.900881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.900899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.910311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.910469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.910488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.919923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.920086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.920104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.929533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.929691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:13273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.929708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.939164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.939328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.939346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.948926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.949087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.949105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.958548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.958703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.958720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.968148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.968305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.968323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.977829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.977992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.978011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.987445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.987598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.987616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:14.997035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:14.997188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:14.997205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:15.006697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 
00:25:54.884 [2024-12-06 03:34:15.006872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:15.006890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.884 [2024-12-06 03:34:15.016568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:54.884 [2024-12-06 03:34:15.016729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-12-06 03:34:15.016750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.026548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.026722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.026743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.036293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.036467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.036487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.046134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.046310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.046329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.055808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.055988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.056007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.065542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.065699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.065717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.075160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.075312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.075330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.084757] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.084913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.084931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.094354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.094509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.094527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.103957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.104113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.104134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.142 [2024-12-06 03:34:15.113549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.142 [2024-12-06 03:34:15.113706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.142 [2024-12-06 03:34:15.113723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:25:55.142 [2024-12-06 03:34:15.123139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78
00:25:55.142 [2024-12-06 03:34:15.123295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:55.142 [2024-12-06 03:34:15.123313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[... repeated records omitted: the same data digest error / WRITE / TRANSIENT TRANSPORT ERROR (00/22) triplet on tqpair=(0x110bd90), qid:1, cid:90-100, varying lba, from 03:34:15.132 through 03:34:15.865 ...]
00:25:55.661 26431.00 IOPS, 103.25 MiB/s [2024-12-06T02:34:15.802Z]
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.875191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.875346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.875364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.884809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.884964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.884983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.894411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.894567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.894586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.904021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.904178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.904196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.913620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.913775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.913794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.923240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.923395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.923413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.932847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.933011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.933029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.942492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.942650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 
[2024-12-06 03:34:15.942668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.952258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.952414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.952433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.961862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.962028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.962046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.971489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.971644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.921 [2024-12-06 03:34:15.971663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.921 [2024-12-06 03:34:15.981108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.921 [2024-12-06 03:34:15.981264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17222 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.922 [2024-12-06 03:34:15.981282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.922 [2024-12-06 03:34:15.990709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.922 [2024-12-06 03:34:15.990862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.922 [2024-12-06 03:34:15.990880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.922 [2024-12-06 03:34:16.000319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.922 [2024-12-06 03:34:16.000471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.922 [2024-12-06 03:34:16.000489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.922 [2024-12-06 03:34:16.009931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.922 [2024-12-06 03:34:16.010092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.922 [2024-12-06 03:34:16.010110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.922 [2024-12-06 03:34:16.019560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.922 [2024-12-06 03:34:16.019715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:90 nsid:1 lba:13655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.922 [2024-12-06 03:34:16.019733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.922 [2024-12-06 03:34:16.029162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.922 [2024-12-06 03:34:16.029317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.922 [2024-12-06 03:34:16.029336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.922 [2024-12-06 03:34:16.038788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.922 [2024-12-06 03:34:16.038964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.922 [2024-12-06 03:34:16.038983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.922 [2024-12-06 03:34:16.048559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.922 [2024-12-06 03:34:16.048731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.922 [2024-12-06 03:34:16.048749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.922 [2024-12-06 03:34:16.058472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:55.922 [2024-12-06 03:34:16.058632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.922 [2024-12-06 03:34:16.058652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.068356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.068530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.068550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.078151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.078309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.078327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.088047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.088204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.088222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.097723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 
[2024-12-06 03:34:16.097897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.097915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.107478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.107651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.107673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.117117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.117272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.117290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.126698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.126854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.126872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.136289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.136442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.136460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.145889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.146053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.146071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.155477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.155632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.155651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.165089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.165245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.165264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.174733] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.174888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.174905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.184328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.184483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.184500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.193902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.194064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.194086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.203489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.203644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.203662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
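The `data_crc32_calc_done` errors repeating above come from the NVMe/TCP data digest check: when digests are enabled, the receiver recomputes a CRC32C over each DATA PDU payload and compares it against the digest carried in the PDU, logging a "Data digest error" on mismatch. A minimal sketch of that check, not SPDK's actual implementation — this is a plain bitwise CRC32C (reflected polynomial 0x82F63B78) with a hypothetical `data_digest_ok` helper:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), as used for NVMe/TCP digests."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected polynomial 0x82F63B78, one bit at a time
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF


def data_digest_ok(payload: bytes, received_digest: int) -> bool:
    """A mismatch here is what the log reports as a data digest error."""
    return crc32c(payload) == received_digest


# Standard CRC-32C check value for the ASCII string "123456789"
assert crc32c(b"123456789") == 0xE3069283
```

Real implementations use table-driven or hardware-accelerated (SSE4.2 `crc32`) variants; the test harness here appears to inject corrupted digests deliberately, which is why every PDU on this qpair fails the check.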
00:25:56.238 [2024-12-06 03:34:16.213080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.213233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.213251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.222646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.222800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.222817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.232229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.232382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.232400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.241999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.242155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.242174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.251595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.238 [2024-12-06 03:34:16.251749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.238 [2024-12-06 03:34:16.251767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.238 [2024-12-06 03:34:16.261175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.261330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.261348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.239 [2024-12-06 03:34:16.270757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.270911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.270929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.239 [2024-12-06 03:34:16.280315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.280470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.280488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.239 [2024-12-06 03:34:16.289892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.290067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.290085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.239 [2024-12-06 03:34:16.299776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.299934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.299956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.239 [2024-12-06 03:34:16.309428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.309599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.309617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.239 [2024-12-06 03:34:16.319499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.319665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.319687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.239 [2024-12-06 03:34:16.329441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.329604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.329624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.239 [2024-12-06 03:34:16.339348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.339507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.339527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.239 [2024-12-06 03:34:16.349264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.239 [2024-12-06 03:34:16.349439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.239 [2024-12-06 03:34:16.349458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.525 [2024-12-06 03:34:16.359154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.525 [2024-12-06 03:34:16.359312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.525 
[2024-12-06 03:34:16.359330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.525 [2024-12-06 03:34:16.369011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.525 [2024-12-06 03:34:16.369175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.525 [2024-12-06 03:34:16.369194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.525 [2024-12-06 03:34:16.378828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.525 [2024-12-06 03:34:16.378994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.525 [2024-12-06 03:34:16.379013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.525 [2024-12-06 03:34:16.388726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.525 [2024-12-06 03:34:16.388884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.525 [2024-12-06 03:34:16.388902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.525 [2024-12-06 03:34:16.398580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.525 [2024-12-06 03:34:16.398754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11265 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.525 [2024-12-06 03:34:16.398773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.525 [2024-12-06 03:34:16.408233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.525 [2024-12-06 03:34:16.408405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.525 [2024-12-06 03:34:16.408424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.525 [2024-12-06 03:34:16.417879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.525 [2024-12-06 03:34:16.418042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.525 [2024-12-06 03:34:16.418061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.525 [2024-12-06 03:34:16.427457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.525 [2024-12-06 03:34:16.427611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.427629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.437051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.437205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:4700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.437222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.446672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.446826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.446846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.456246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.456398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.456416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.465820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.465975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.465992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.475384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.475537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.475555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.485000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.485153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.485171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.494597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.494751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.494769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.504167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.504322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.504339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.513727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 
00:25:56.526 [2024-12-06 03:34:16.513881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.513899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.523331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.523483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.523501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.532894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.533059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.533077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.542425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.542594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.542611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.552151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.552324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.552341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.561797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.561971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.561990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.571442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.571596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.571614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.581016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.581171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.581188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.590599] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.590751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.590768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.600099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.600253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.600271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.609677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.609831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.609849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.619258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.619413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.619431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:25:56.526 [2024-12-06 03:34:16.628842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.629004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.629022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.638433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.638590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.638608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.526 [2024-12-06 03:34:16.648018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.526 [2024-12-06 03:34:16.648174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.526 [2024-12-06 03:34:16.648192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.657875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.658041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.658060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.667738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.667898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.667917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.677602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.677760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.677779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.687453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.687613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.687630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.697284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.697459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.697479] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.706991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.707146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.707164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.716580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.716735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.716753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.726153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.726307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.726325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.735721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.735879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.735896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.745311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.745467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.745485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.755081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.755235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.755254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.764656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.764812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.764829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 [2024-12-06 03:34:16.774338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.774512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:56.864 [2024-12-06 03:34:16.774531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 26435.50 IOPS, 103.26 MiB/s [2024-12-06T02:34:17.005Z] [2024-12-06 03:34:16.783937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110bd90) with pdu=0x200016efda78 00:25:56.864 [2024-12-06 03:34:16.784103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.864 [2024-12-06 03:34:16.784120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.864 00:25:56.864 Latency(us) 00:25:56.864 [2024-12-06T02:34:17.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.864 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:56.864 nvme0n1 : 2.01 26435.94 103.27 0.00 0.00 4833.31 2849.39 10656.72 00:25:56.864 [2024-12-06T02:34:17.005Z] =================================================================================================================== 00:25:56.864 [2024-12-06T02:34:17.005Z] Total : 26435.94 103.27 0.00 0.00 4833.31 2849.39 10656.72 00:25:56.864 { 00:25:56.864 "results": [ 00:25:56.864 { 00:25:56.864 "job": "nvme0n1", 00:25:56.864 "core_mask": "0x2", 00:25:56.864 "workload": "randwrite", 00:25:56.864 "status": "finished", 00:25:56.864 "queue_depth": 128, 00:25:56.864 "io_size": 4096, 00:25:56.864 "runtime": 2.006019, 00:25:56.864 "iops": 26435.9410354538, 00:25:56.864 "mibps": 103.26539466974141, 00:25:56.864 "io_failed": 0, 00:25:56.864 "io_timeout": 0, 00:25:56.864 "avg_latency_us": 4833.310693646784, 00:25:56.864 "min_latency_us": 2849.391304347826, 00:25:56.864 "max_latency_us": 10656.72347826087 00:25:56.864 } 00:25:56.864 ], 00:25:56.864 "core_count": 1 00:25:56.864 } 00:25:56.865 
03:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:56.865 03:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:56.865 03:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:56.865 | .driver_specific 00:25:56.865 | .nvme_error 00:25:56.865 | .status_code 00:25:56.865 | .command_transient_transport_error' 00:25:56.865 03:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 )) 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2760732 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2760732 ']' 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2760732 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2760732 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760732' 
00:25:57.132 killing process with pid 2760732 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2760732 00:25:57.132 Received shutdown signal, test time was about 2.000000 seconds 00:25:57.132 00:25:57.132 Latency(us) 00:25:57.132 [2024-12-06T02:34:17.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.132 [2024-12-06T02:34:17.273Z] =================================================================================================================== 00:25:57.132 [2024-12-06T02:34:17.273Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2760732 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2761308 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2761308 /var/tmp/bperf.sock 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2761308 ']' 00:25:57.132 03:34:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:57.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.132 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:57.391 [2024-12-06 03:34:17.272170] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:25:57.391 [2024-12-06 03:34:17.272222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761308 ] 00:25:57.391 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:57.391 Zero copy mechanism will not be used. 
00:25:57.391 [2024-12-06 03:34:17.335126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.391 [2024-12-06 03:34:17.377042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.391 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.391 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:57.391 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:57.391 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:57.650 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:57.650 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.650 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:57.650 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.650 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.650 03:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.909 nvme0n1 00:25:57.909 03:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:58.168 03:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.168 03:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:58.168 03:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.168 03:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:58.168 03:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:58.168 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:58.168 Zero copy mechanism will not be used. 00:25:58.168 Running I/O for 2 seconds... 00:25:58.168 [2024-12-06 03:34:18.150110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.168 [2024-12-06 03:34:18.150203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.168 [2024-12-06 03:34:18.150234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.168 [2024-12-06 03:34:18.156440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.168 [2024-12-06 03:34:18.156556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.168 [2024-12-06 03:34:18.156580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.168 
[2024-12-06 03:34:18.162902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.168 [2024-12-06 03:34:18.163060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.168 [2024-12-06 03:34:18.163082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.168 [2024-12-06 03:34:18.170127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.168 [2024-12-06 03:34:18.170245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.168 [2024-12-06 03:34:18.170266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.168 [2024-12-06 03:34:18.176447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.168 [2024-12-06 03:34:18.176578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.168 [2024-12-06 03:34:18.176599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.168 [2024-12-06 03:34:18.183124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.168 [2024-12-06 03:34:18.183284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.168 [2024-12-06 03:34:18.183305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.168 [2024-12-06 03:34:18.189975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.168 [2024-12-06 03:34:18.190066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.168 [2024-12-06 03:34:18.190090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... same three-line pattern (tcp.c:2241 data digest error on tqpair=(0x110c270), nvme_qpair.c:243 WRITE command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens of WRITE commands on sqid:1 cid:9 with varying timestamps, LBAs, and sqhd values, from 03:34:18.195 through 03:34:18.541 ...]
00:25:58.432 [2024-12-06 03:34:18.546200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.432 [2024-12-06 03:34:18.546348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.432 [2024-12-06 03:34:18.546366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.432 [2024-12-06 03:34:18.551389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.432 [2024-12-06 03:34:18.551464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.432 [2024-12-06 03:34:18.551482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.432 [2024-12-06 03:34:18.556747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.432 [2024-12-06 03:34:18.556811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.432 [2024-12-06 03:34:18.556829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.432 [2024-12-06 03:34:18.561492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.432 [2024-12-06 03:34:18.561556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.432 [2024-12-06 03:34:18.561575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.691 [2024-12-06 03:34:18.566252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.691 [2024-12-06 03:34:18.566363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-12-06 03:34:18.566382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.691 [2024-12-06 03:34:18.570772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.691 [2024-12-06 03:34:18.570854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-12-06 03:34:18.570877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.691 [2024-12-06 03:34:18.575148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.691 [2024-12-06 03:34:18.575228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-12-06 03:34:18.575247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.691 [2024-12-06 03:34:18.579790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.579863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.579881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.585059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.585170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.692 [2024-12-06 03:34:18.585188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.590398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.590474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.590492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.595779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.595894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.595912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.603425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.603588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.603606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.609854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.609969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23776 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.609988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.615048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.615107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.615125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.620467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.620604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.620627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.626678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.626824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.626843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.632331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.632453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.632471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.637850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.637922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.637941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.642244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.642315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.642333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.646671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.646769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.646788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.650961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 
00:25:58.692 [2024-12-06 03:34:18.651037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.651056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.655191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.655272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.655291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.659541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.659620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.659638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.663832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.663894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.663913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.668129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.668200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.668218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.672466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.672551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.672570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.676703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.692 [2024-12-06 03:34:18.676789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-12-06 03:34:18.676808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.692 [2024-12-06 03:34:18.680967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.681046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.681065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.685171] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.685245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.685263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.689383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.689451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.689469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.693622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.693700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.693718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.697804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.697866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.697888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:25:58.693 [2024-12-06 03:34:18.702618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.702693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.702712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.707215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.707273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.707291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.711510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.711587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.711605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.715731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.715796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.715816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.720108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.720169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.720187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.724664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.724734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.724753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.729090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.729168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.729186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.733809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.733889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.733907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.738375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.738451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.738473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.743124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.743210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.743228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.748288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.748351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.748368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.753687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.753747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.693 [2024-12-06 03:34:18.753765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.759378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.759440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.759458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.764492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.764555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.764574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.769921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.769992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.770011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.774821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.693 [2024-12-06 03:34:18.774884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-12-06 03:34:18.774903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.693 [2024-12-06 03:34:18.779863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.694 [2024-12-06 03:34:18.779924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-12-06 03:34:18.779943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.694 [2024-12-06 03:34:18.785275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.694 [2024-12-06 03:34:18.785368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-12-06 03:34:18.785387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.694 [2024-12-06 03:34:18.790668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.694 [2024-12-06 03:34:18.790745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-12-06 03:34:18.790763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.694 [2024-12-06 03:34:18.795850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.694 [2024-12-06 03:34:18.795969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-12-06 03:34:18.795988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.694 [2024-12-06 03:34:18.800879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.694 [2024-12-06 03:34:18.800957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-12-06 03:34:18.800975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.694 [2024-12-06 03:34:18.806510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.694 [2024-12-06 03:34:18.806572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-12-06 03:34:18.806590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.694 [2024-12-06 03:34:18.812112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:58.694 [2024-12-06 03:34:18.812180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-12-06 03:34:18.812198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.694 [2024-12-06 03:34:18.817032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 
00:25:58.694 [2024-12-06 03:34:18.817091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.694 [2024-12-06 03:34:18.817109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.694 [2024-12-06 03:34:18.821707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.694 [2024-12-06 03:34:18.821778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.694 [2024-12-06 03:34:18.821797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.694 [2024-12-06 03:34:18.826068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.694 [2024-12-06 03:34:18.826130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.694 [2024-12-06 03:34:18.826152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.954 [2024-12-06 03:34:18.830661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.954 [2024-12-06 03:34:18.830742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.954 [2024-12-06 03:34:18.830762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.954 [2024-12-06 03:34:18.835268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.954 [2024-12-06 03:34:18.835341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.954 [2024-12-06 03:34:18.835359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.954 [2024-12-06 03:34:18.839924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.954 [2024-12-06 03:34:18.839995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.954 [2024-12-06 03:34:18.840013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.954 [2024-12-06 03:34:18.844754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.954 [2024-12-06 03:34:18.844825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.954 [2024-12-06 03:34:18.844844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.954 [2024-12-06 03:34:18.849488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.954 [2024-12-06 03:34:18.849567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.954 [2024-12-06 03:34:18.849585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.853664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.853742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.853760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.857816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.857876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.857894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.861907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.861993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.862012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.866068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.866150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.866172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.870175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.870249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.870268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.874324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.874398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.874417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.878467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.878548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.878566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.882573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.882643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.882662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.886637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.886698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.886716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.890708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.890815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.890833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.894799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.894886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.894905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.898902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.898981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.898999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.955 [2024-12-06 03:34:18.903037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.955 [2024-12-06 03:34:18.903122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.955 [2024-12-06 03:34:18.903141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.907184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.907261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.907280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.911435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.911534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.911552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.915565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.915637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.915657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.919671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.919746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.919765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.923954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.924028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.924046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.928089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.928177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.928196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.932234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.932325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.932344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.936417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.936491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.936513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.940546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.940612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.940630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.944719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.944801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.944819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.948852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.948924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.948942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.952993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.953053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.953071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.957113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.957188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.956 [2024-12-06 03:34:18.957207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.956 [2024-12-06 03:34:18.961163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.956 [2024-12-06 03:34:18.961234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:18.961252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:18.965264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:18.965325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:18.965343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:18.969475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:18.969548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:18.969567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:18.973606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:18.973716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:18.973738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:18.977703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:18.977768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:18.977786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:18.981802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:18.981869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:18.981887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:18.986085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:18.986152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:18.986170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:18.990786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:18.990860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:18.990878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:18.995328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:18.995408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:18.995426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:19.001282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:19.001351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:19.001369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:19.006800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:19.006926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:19.006943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:19.011681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:19.011751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:19.011769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:19.016301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:19.016369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:19.016387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:19.020969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:19.021056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:19.021074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:19.025675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.957 [2024-12-06 03:34:19.025756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.957 [2024-12-06 03:34:19.025775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.957 [2024-12-06 03:34:19.030009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.030076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.030093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.034324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.034442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.034460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.038620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.038678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.038696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.042882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.042962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.042980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.047229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.047286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.047305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.051451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.051525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.051546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.055872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.055931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.055955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.060617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.060676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.060695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.065016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.065100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.065118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.070131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.070193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.070212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.074930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.075005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.075024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.079919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.079989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.958 [2024-12-06 03:34:19.080007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:58.958 [2024-12-06 03:34:19.084906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.958 [2024-12-06 03:34:19.084989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.959 [2024-12-06 03:34:19.085007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:58.959 [2024-12-06 03:34:19.090023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:58.959 [2024-12-06 03:34:19.090123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.959 [2024-12-06 03:34:19.090142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.094739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.094824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.094846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.099478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.099538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.099557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.104214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.104313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.104331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.108856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.108914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.108933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.113500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.113604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.113622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.118339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.118419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.118438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.123035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.123138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.123156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.127778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.127868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.127887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.132587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.132682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.132700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.137154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.137216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.137234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.141718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.141779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.141798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:59.218 6429.00 IOPS, 803.62 MiB/s [2024-12-06T02:34:19.359Z] [2024-12-06 03:34:19.147294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.147375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.147394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.152508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.152649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.152667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.159121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.159258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.159294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:59.218 [2024-12-06 03:34:19.164925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.218 [2024-12-06 03:34:19.165065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.218 [2024-12-06 03:34:19.165085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0
dnr:0 00:25:59.218 [2024-12-06 03:34:19.170635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.218 [2024-12-06 03:34:19.170698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.218 [2024-12-06 03:34:19.170716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.218 [2024-12-06 03:34:19.175736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.218 [2024-12-06 03:34:19.175827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.218 [2024-12-06 03:34:19.175845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.218 [2024-12-06 03:34:19.181079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.218 [2024-12-06 03:34:19.181153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.218 [2024-12-06 03:34:19.181174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.218 [2024-12-06 03:34:19.186877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.218 [2024-12-06 03:34:19.186983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.218 [2024-12-06 03:34:19.187002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.218 [2024-12-06 03:34:19.192489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.218 [2024-12-06 03:34:19.192559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.218 [2024-12-06 03:34:19.192577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.218 [2024-12-06 03:34:19.197343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.218 [2024-12-06 03:34:19.197426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.218 [2024-12-06 03:34:19.197444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.218 [2024-12-06 03:34:19.202025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.218 [2024-12-06 03:34:19.202099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.218 [2024-12-06 03:34:19.202117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.218 [2024-12-06 03:34:19.206335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.218 [2024-12-06 03:34:19.206401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.206420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.210644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.210713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.210732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.214895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.214986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.215005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.219090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.219159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.219177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.223518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.223598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.219 [2024-12-06 03:34:19.223617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.228263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.228325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.228344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.233678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.233735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.233754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.239115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.239178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.239198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.244020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.244086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.244105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.249120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.249185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.249204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.254761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.254824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.254844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.260055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.260117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.260136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.265974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.266052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.266071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.271999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.272085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.272103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.277173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.277255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.277274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.282471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.282546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.282565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.288666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 
00:25:59.219 [2024-12-06 03:34:19.288732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.288751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.294345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.294414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.294432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.299670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.299774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.299793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.305042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.305109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.305128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.310601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.310681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.310700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.316091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.316198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.316219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.321574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.321634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.321653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.326900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.327037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.327055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.332876] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.332977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.332995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.338270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.338351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.338369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.343535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.343609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.343628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.219 [2024-12-06 03:34:19.348848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.348967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.348986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:25:59.219 [2024-12-06 03:34:19.354007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.219 [2024-12-06 03:34:19.354090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.219 [2024-12-06 03:34:19.354109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.477 [2024-12-06 03:34:19.359550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.477 [2024-12-06 03:34:19.359616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.477 [2024-12-06 03:34:19.359636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.477 [2024-12-06 03:34:19.364752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.477 [2024-12-06 03:34:19.364832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.477 [2024-12-06 03:34:19.364851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.477 [2024-12-06 03:34:19.369655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.477 [2024-12-06 03:34:19.369732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.477 [2024-12-06 03:34:19.369750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.477 [2024-12-06 03:34:19.374741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.477 [2024-12-06 03:34:19.374871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.477 [2024-12-06 03:34:19.374889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.477 [2024-12-06 03:34:19.379605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.379698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.379717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.384378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.384476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.384495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.389936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.390006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.390024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.395515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.395581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.395600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.400764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.400843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.400863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.405963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.406038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.406058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.411619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.411702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.478 [2024-12-06 03:34:19.411720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.416708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.416771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.416790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.421440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.421515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.421534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.425794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.425867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.425886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.430069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.430139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.430158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.434348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.434409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.434427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.438661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.438726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.438745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.442922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.442997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.443016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.447181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.447258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.447281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.451760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.451863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.451881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.456230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.456295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.456314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.460716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.460777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.460795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.465516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 
00:25:59.478 [2024-12-06 03:34:19.465592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.465611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.470719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.470781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.470800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.476517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.476643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.476661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.481504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.481566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.481585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.486759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.486822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.486841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.492135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.492201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.492220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.497243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.497310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.497329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.502641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.502726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.478 [2024-12-06 03:34:19.502746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.478 [2024-12-06 03:34:19.507754] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.478 [2024-12-06 03:34:19.507847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.507865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.512905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.513034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.513053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.518145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.518208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.518226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.523546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.523617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.523635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 
dnr:0 00:25:59.479 [2024-12-06 03:34:19.529009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.529073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.529092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.534152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.534247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.534266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.539086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.539204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.539222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.543696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.543770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.543788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.548103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.548164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.548182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.552342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.552420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.552439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.556614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.556700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.556719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.560841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.560972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.560991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.565086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.565157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.565174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.569311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.569373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.569392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.573525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.573601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.573624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.577727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.577803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.479 [2024-12-06 03:34:19.577822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.581884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.581980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.581999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.586069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.586148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.586166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.590266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.590344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.590364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.594470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.594533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.594552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.598648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.598725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.598744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.602840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.602921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.602940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.607366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.607428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.607447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.479 [2024-12-06 03:34:19.612378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.479 [2024-12-06 03:34:19.612509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.479 [2024-12-06 03:34:19.612529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.738 [2024-12-06 03:34:19.618083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.738 [2024-12-06 03:34:19.618267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-12-06 03:34:19.618285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.738 [2024-12-06 03:34:19.623709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.738 [2024-12-06 03:34:19.623806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-12-06 03:34:19.623825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.738 [2024-12-06 03:34:19.628844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.738 [2024-12-06 03:34:19.628983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-12-06 03:34:19.629002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.738 [2024-12-06 03:34:19.634269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 
00:25:59.738 [2024-12-06 03:34:19.634380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-12-06 03:34:19.634399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.738 [2024-12-06 03:34:19.639390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.738 [2024-12-06 03:34:19.639468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-12-06 03:34:19.639487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.738 [2024-12-06 03:34:19.644467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.738 [2024-12-06 03:34:19.644548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-12-06 03:34:19.644567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.738 [2024-12-06 03:34:19.650277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.738 [2024-12-06 03:34:19.650382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.738 [2024-12-06 03:34:19.650401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.738 [2024-12-06 03:34:19.655270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.738 [2024-12-06 03:34:19.655340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.655359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.660310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.660394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.660413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.665492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.665565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.665584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.671179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.671243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.671262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.676629] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.676693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.676712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.681918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.682026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.682045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.687207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.687270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.687289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.692221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.692300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.692319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 
dnr:0 00:25:59.739 [2024-12-06 03:34:19.696831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.696901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.696920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.702480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.702541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.702564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.707756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.707821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.707840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.713010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.713130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.713148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.718725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.718810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.718829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.724268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.724340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.724359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.729500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.729605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.729624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.735191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.735267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.735285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.740944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.741014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.741032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.746654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.746744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.746763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.751407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.751474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.751492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.756021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.756092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.739 [2024-12-06 03:34:19.756111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.760497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.760561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.760579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.765103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.765177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.765195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.769617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.769693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.769712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.774179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.774270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.774289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.778796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.778880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.778899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.783291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.783366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.783385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.787704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.787764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.739 [2024-12-06 03:34:19.787782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.739 [2024-12-06 03:34:19.792313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90 00:25:59.739 [2024-12-06 03:34:19.792438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.740 [2024-12-06 03:34:19.792457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:59.740 [2024-12-06 03:34:19.797435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x110c270) with pdu=0x200016efef90
00:25:59.740 [2024-12-06 03:34:19.797568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.740 [2024-12-06 03:34:19.797586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / print_command / print_completion triplet repeats for dozens more WRITEs on tqpair=(0x110c270), with varying LBAs and sqhd values, from 03:34:19.802528 through 03:34:20.146143 ...]
00:26:00.260 6311.50 IOPS, 788.94 MiB/s
00:26:00.260 Latency(us)
00:26:00.260 [2024-12-06T02:34:20.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:00.260 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:00.260 nvme0n1 : 2.00 6310.10 788.76 0.00 0.00 2531.24 1652.65 13905.03
00:26:00.260 [2024-12-06T02:34:20.401Z] ===================================================================================================================
00:26:00.260 [2024-12-06T02:34:20.401Z] Total : 6310.10 788.76 0.00 0.00 2531.24 1652.65 13905.03
00:26:00.260 {
00:26:00.260   "results": [
00:26:00.260     {
00:26:00.260       "job": "nvme0n1",
00:26:00.260       "core_mask": "0x2",
00:26:00.260       "workload": "randwrite",
00:26:00.260       "status": "finished",
00:26:00.260       "queue_depth": 16,
00:26:00.260       "io_size": 131072,
00:26:00.260       "runtime": 2.003614,
00:26:00.260       "iops": 6310.097653540053,
00:26:00.260       "mibps": 788.7622066925067,
00:26:00.260       "io_failed": 0,
00:26:00.260       "io_timeout": 0,
00:26:00.260       "avg_latency_us": 2531.24402037216,
00:26:00.260       "min_latency_us": 1652.6469565217392,
00:26:00.260       "max_latency_us": 13905.029565217392
00:26:00.260     }
00:26:00.260   ],
00:26:00.260   "core_count": 1
00:26:00.260 }
03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:00.260 | .driver_specific
00:26:00.260 | .nvme_error
00:26:00.260 | .status_code
00:26:00.260 | .command_transient_transport_error'
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 408 > 0 ))
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2761308
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2761308 ']'
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2761308
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:00.260 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2761308
00:26:00.519 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:00.519 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:00.519 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2761308'
00:26:00.519 killing process with pid 2761308
03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2761308
00:26:00.519 Received shutdown signal, test time was about 2.000000 seconds
00:26:00.519
00:26:00.519 Latency(us)
00:26:00.519 [2024-12-06T02:34:20.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:00.519 [2024-12-06T02:34:20.660Z] ===================================================================================================================
00:26:00.519 [2024-12-06T02:34:20.660Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:00.519 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2761308
00:26:00.519 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2759547
00:26:00.519 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2759547 ']'
00:26:00.519 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2759547
00:26:00.519 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:00.520 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:00.520 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759547
00:26:00.520 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:00.520 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:00.520 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759547'
00:26:00.520 killing process with pid 2759547
03:34:20
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2759547 00:26:00.520 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2759547 00:26:00.779 00:26:00.779 real 0m13.904s 00:26:00.779 user 0m26.590s 00:26:00.779 sys 0m4.563s 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:00.779 ************************************ 00:26:00.779 END TEST nvmf_digest_error 00:26:00.779 ************************************ 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:00.779 rmmod nvme_tcp 00:26:00.779 rmmod nvme_fabrics 00:26:00.779 rmmod nvme_keyring 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2759547 ']' 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@518 -- # killprocess 2759547 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2759547 ']' 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2759547 00:26:00.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2759547) - No such process 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2759547 is not found' 00:26:00.779 Process with pid 2759547 is not found 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.779 03:34:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:03.314 00:26:03.314 real 0m36.003s 00:26:03.314 
user 0m55.091s 00:26:03.314 sys 0m13.454s 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:03.314 ************************************ 00:26:03.314 END TEST nvmf_digest 00:26:03.314 ************************************ 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:03.314 03:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.314 ************************************ 00:26:03.314 START TEST nvmf_bdevperf 00:26:03.314 ************************************ 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:03.314 * Looking for test storage... 
00:26:03.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:03.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.314 --rc genhtml_branch_coverage=1 00:26:03.314 --rc genhtml_function_coverage=1 00:26:03.314 --rc genhtml_legend=1 00:26:03.314 --rc geninfo_all_blocks=1 00:26:03.314 --rc geninfo_unexecuted_blocks=1 00:26:03.314 00:26:03.314 ' 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:26:03.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.314 --rc genhtml_branch_coverage=1 00:26:03.314 --rc genhtml_function_coverage=1 00:26:03.314 --rc genhtml_legend=1 00:26:03.314 --rc geninfo_all_blocks=1 00:26:03.314 --rc geninfo_unexecuted_blocks=1 00:26:03.314 00:26:03.314 ' 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:03.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.314 --rc genhtml_branch_coverage=1 00:26:03.314 --rc genhtml_function_coverage=1 00:26:03.314 --rc genhtml_legend=1 00:26:03.314 --rc geninfo_all_blocks=1 00:26:03.314 --rc geninfo_unexecuted_blocks=1 00:26:03.314 00:26:03.314 ' 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:03.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.314 --rc genhtml_branch_coverage=1 00:26:03.314 --rc genhtml_function_coverage=1 00:26:03.314 --rc genhtml_legend=1 00:26:03.314 --rc geninfo_all_blocks=1 00:26:03.314 --rc geninfo_unexecuted_blocks=1 00:26:03.314 00:26:03.314 ' 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.314 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:03.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:03.315 03:34:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:09.874 03:34:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:09.874 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.874 
03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:09.874 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:09.874 Found net devices under 0000:86:00.0: cvl_0_0 00:26:09.874 03:34:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:09.874 Found net devices under 0000:86:00.1: cvl_0_1 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.874 03:34:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:09.874 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:09.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:26:09.874 00:26:09.874 --- 10.0.0.2 ping statistics --- 00:26:09.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.874 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:26:09.874 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:09.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:26:09.874 00:26:09.875 --- 10.0.0.1 ping statistics --- 00:26:09.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.875 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2765434 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2765434 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2765434 ']' 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.875 [2024-12-06 03:34:29.115939] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:26:09.875 [2024-12-06 03:34:29.115993] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.875 [2024-12-06 03:34:29.183391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:09.875 [2024-12-06 03:34:29.223773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.875 [2024-12-06 03:34:29.223810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.875 [2024-12-06 03:34:29.223818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.875 [2024-12-06 03:34:29.223824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.875 [2024-12-06 03:34:29.223829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
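The target above is launched with `-m 0xE` and the next records show reactors starting on cores 1, 2 and 3, while the bdevperf instances later in the log use `-c 0x1` and start a single reactor on core 0. A minimal illustrative sketch (not part of the test scripts) of how such a hex core mask maps to reactor cores:

```python
# Sketch: decode an SPDK-style CPU core mask into the core indices it selects.
# The mask values (0xE for nvmf_tgt, 0x1 for bdevperf) are taken from this log;
# the function itself is illustrative, not SPDK code.
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core indices whose bits are set in the mask."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

print(cores_from_mask(0xE))  # 0xE = 0b1110 -> cores 1, 2, 3
print(cores_from_mask(0x1))  # 0x1 = 0b0001 -> core 0 only
```

This matches the reactor-start notices that follow: one reactor per set bit in the mask.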
00:26:09.875 [2024-12-06 03:34:29.225283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.875 [2024-12-06 03:34:29.225352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:09.875 [2024-12-06 03:34:29.225353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.875 [2024-12-06 03:34:29.374930] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.875 Malloc0 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.875 [2024-12-06 03:34:29.440002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:09.875 
03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:09.875 { 00:26:09.875 "params": { 00:26:09.875 "name": "Nvme$subsystem", 00:26:09.875 "trtype": "$TEST_TRANSPORT", 00:26:09.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:09.875 "adrfam": "ipv4", 00:26:09.875 "trsvcid": "$NVMF_PORT", 00:26:09.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:09.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:09.875 "hdgst": ${hdgst:-false}, 00:26:09.875 "ddgst": ${ddgst:-false} 00:26:09.875 }, 00:26:09.875 "method": "bdev_nvme_attach_controller" 00:26:09.875 } 00:26:09.875 EOF 00:26:09.875 )") 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:09.875 03:34:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:09.875 "params": { 00:26:09.875 "name": "Nvme1", 00:26:09.875 "trtype": "tcp", 00:26:09.875 "traddr": "10.0.0.2", 00:26:09.875 "adrfam": "ipv4", 00:26:09.875 "trsvcid": "4420", 00:26:09.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:09.875 "hdgst": false, 00:26:09.875 "ddgst": false 00:26:09.875 }, 00:26:09.875 "method": "bdev_nvme_attach_controller" 00:26:09.875 }' 00:26:09.875 [2024-12-06 03:34:29.490164] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:26:09.875 [2024-12-06 03:34:29.490204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765457 ] 00:26:09.875 [2024-12-06 03:34:29.554138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.875 [2024-12-06 03:34:29.595592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.875 Running I/O for 1 seconds... 00:26:10.808 10904.00 IOPS, 42.59 MiB/s 00:26:10.808 Latency(us) 00:26:10.808 [2024-12-06T02:34:30.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.808 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:10.808 Verification LBA range: start 0x0 length 0x4000 00:26:10.808 Nvme1n1 : 1.01 10958.80 42.81 0.00 0.00 11634.81 2364.99 13734.07 00:26:10.808 [2024-12-06T02:34:30.949Z] =================================================================================================================== 00:26:10.808 [2024-12-06T02:34:30.949Z] Total : 10958.80 42.81 0.00 0.00 11634.81 2364.99 13734.07 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2765695 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:11.066 { 00:26:11.066 "params": { 00:26:11.066 "name": "Nvme$subsystem", 00:26:11.066 "trtype": "$TEST_TRANSPORT", 00:26:11.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.066 "adrfam": "ipv4", 00:26:11.066 "trsvcid": "$NVMF_PORT", 00:26:11.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.066 "hdgst": ${hdgst:-false}, 00:26:11.066 "ddgst": ${ddgst:-false} 00:26:11.066 }, 00:26:11.066 "method": "bdev_nvme_attach_controller" 00:26:11.066 } 00:26:11.066 EOF 00:26:11.066 )") 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:11.066 03:34:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:11.066 "params": { 00:26:11.066 "name": "Nvme1", 00:26:11.066 "trtype": "tcp", 00:26:11.066 "traddr": "10.0.0.2", 00:26:11.066 "adrfam": "ipv4", 00:26:11.066 "trsvcid": "4420", 00:26:11.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.066 "hdgst": false, 00:26:11.066 "ddgst": false 00:26:11.066 }, 00:26:11.066 "method": "bdev_nvme_attach_controller" 00:26:11.066 }' 00:26:11.066 [2024-12-06 03:34:31.099077] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:26:11.066 [2024-12-06 03:34:31.099124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765695 ] 00:26:11.066 [2024-12-06 03:34:31.162752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.066 [2024-12-06 03:34:31.201280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.634 Running I/O for 15 seconds... 00:26:13.510 10793.00 IOPS, 42.16 MiB/s [2024-12-06T02:34:34.223Z] 10896.00 IOPS, 42.56 MiB/s [2024-12-06T02:34:34.223Z] 03:34:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2765434 00:26:14.082 03:34:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:14.082 [2024-12-06 03:34:34.072430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.082 [2024-12-06 03:34:34.072471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072529] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 
03:34:34.072967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.072987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.072996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.073010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.073022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.073032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.073041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.073053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.073061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.073070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.073079] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.073092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.073101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.073112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.073120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.073134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.073145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.073153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.082 [2024-12-06 03:34:34.073164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.082 [2024-12-06 03:34:34.073174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.083 
[2024-12-06 03:34:34.073472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.083 [2024-12-06 03:34:34.073486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073554] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.083 [2024-12-06 03:34:34.073788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.083 [2024-12-06 03:34:34.073796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 
03:34:34.073904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.073987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.073994] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 
[2024-12-06 03:34:34.074345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.084 [2024-12-06 03:34:34.074402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.084 [2024-12-06 03:34:34.074409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.085 [2024-12-06 03:34:34.074424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074432] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.085 [2024-12-06 03:34:34.074439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.085 [2024-12-06 03:34:34.074455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.085 [2024-12-06 03:34:34.074471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.085 [2024-12-06 03:34:34.074486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.085 [2024-12-06 03:34:34.074502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.085 [2024-12-06 03:34:34.074516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.085 [2024-12-06 03:34:34.074532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.085 [2024-12-06 03:34:34.074547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.085 [2024-12-06 03:34:34.074563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.085 [2024-12-06 03:34:34.074578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.074586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd3410 is same with the state(6) to be set 00:26:14.085 [2024-12-06 03:34:34.074596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.085 [2024-12-06 03:34:34.074602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.085 [2024-12-06 03:34:34.074608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89664 len:8 PRP1 0x0 PRP2 0x0 
00:26:14.085 [2024-12-06 03:34:34.074616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.085 [2024-12-06 03:34:34.077781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.085 [2024-12-06 03:34:34.077835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:14.085 [2024-12-06 03:34:34.078385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.085 [2024-12-06 03:34:34.078430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:14.085 [2024-12-06 03:34:34.078455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:14.085 [2024-12-06 03:34:34.078965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:14.085 [2024-12-06 03:34:34.079141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.085 [2024-12-06 03:34:34.079150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.085 [2024-12-06 03:34:34.079159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.085 [2024-12-06 03:34:34.079168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.085 [2024-12-06 03:34:34.091046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.085 [2024-12-06 03:34:34.091539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.085 [2024-12-06 03:34:34.091588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:14.085 [2024-12-06 03:34:34.091613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:14.085 [2024-12-06 03:34:34.092218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:14.085 [2024-12-06 03:34:34.092806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.085 [2024-12-06 03:34:34.092816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.085 [2024-12-06 03:34:34.092823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.085 [2024-12-06 03:34:34.092829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.085 [2024-12-06 03:34:34.103925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.085 [2024-12-06 03:34:34.104373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.085 [2024-12-06 03:34:34.104421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.085 [2024-12-06 03:34:34.104446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.085 [2024-12-06 03:34:34.104862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.085 [2024-12-06 03:34:34.105035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.085 [2024-12-06 03:34:34.105045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.085 [2024-12-06 03:34:34.105051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.085 [2024-12-06 03:34:34.105059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.085 [2024-12-06 03:34:34.116784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.085 [2024-12-06 03:34:34.117216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.085 [2024-12-06 03:34:34.117234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.085 [2024-12-06 03:34:34.117242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.085 [2024-12-06 03:34:34.117407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.085 [2024-12-06 03:34:34.117580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.085 [2024-12-06 03:34:34.117590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.085 [2024-12-06 03:34:34.117597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.085 [2024-12-06 03:34:34.117603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.085 [2024-12-06 03:34:34.129648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.085 [2024-12-06 03:34:34.130071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.085 [2024-12-06 03:34:34.130089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.085 [2024-12-06 03:34:34.130097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.085 [2024-12-06 03:34:34.130261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.085 [2024-12-06 03:34:34.130427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.085 [2024-12-06 03:34:34.130437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.085 [2024-12-06 03:34:34.130443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.085 [2024-12-06 03:34:34.130450] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.085 [2024-12-06 03:34:34.142563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.085 [2024-12-06 03:34:34.143018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.085 [2024-12-06 03:34:34.143064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.085 [2024-12-06 03:34:34.143097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.085 [2024-12-06 03:34:34.143683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.085 [2024-12-06 03:34:34.143886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.085 [2024-12-06 03:34:34.143896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.085 [2024-12-06 03:34:34.143903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.085 [2024-12-06 03:34:34.143909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.085 [2024-12-06 03:34:34.155506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.085 [2024-12-06 03:34:34.155929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.085 [2024-12-06 03:34:34.155988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.085 [2024-12-06 03:34:34.156012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.085 [2024-12-06 03:34:34.156596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.086 [2024-12-06 03:34:34.157022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.086 [2024-12-06 03:34:34.157033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.086 [2024-12-06 03:34:34.157040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.086 [2024-12-06 03:34:34.157046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.086 [2024-12-06 03:34:34.168348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.086 [2024-12-06 03:34:34.168761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.086 [2024-12-06 03:34:34.168806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.086 [2024-12-06 03:34:34.168831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.086 [2024-12-06 03:34:34.169430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.086 [2024-12-06 03:34:34.170007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.086 [2024-12-06 03:34:34.170018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.086 [2024-12-06 03:34:34.170024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.086 [2024-12-06 03:34:34.170031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.086 [2024-12-06 03:34:34.181297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.086 [2024-12-06 03:34:34.181717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.086 [2024-12-06 03:34:34.181735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.086 [2024-12-06 03:34:34.181742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.086 [2024-12-06 03:34:34.181906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.086 [2024-12-06 03:34:34.182079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.086 [2024-12-06 03:34:34.182093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.086 [2024-12-06 03:34:34.182099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.086 [2024-12-06 03:34:34.182106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.086 [2024-12-06 03:34:34.194175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.086 [2024-12-06 03:34:34.194570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.086 [2024-12-06 03:34:34.194588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.086 [2024-12-06 03:34:34.194596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.086 [2024-12-06 03:34:34.194761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.086 [2024-12-06 03:34:34.194926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.086 [2024-12-06 03:34:34.194935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.086 [2024-12-06 03:34:34.194942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.086 [2024-12-06 03:34:34.194955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.086 [2024-12-06 03:34:34.207046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.086 [2024-12-06 03:34:34.207451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.086 [2024-12-06 03:34:34.207468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.086 [2024-12-06 03:34:34.207475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.086 [2024-12-06 03:34:34.207639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.086 [2024-12-06 03:34:34.207805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.086 [2024-12-06 03:34:34.207814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.086 [2024-12-06 03:34:34.207820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.086 [2024-12-06 03:34:34.207826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.347 [2024-12-06 03:34:34.220225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.347 [2024-12-06 03:34:34.220558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.347 [2024-12-06 03:34:34.220577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.347 [2024-12-06 03:34:34.220586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.347 [2024-12-06 03:34:34.220752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.347 [2024-12-06 03:34:34.220918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.347 [2024-12-06 03:34:34.220927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.347 [2024-12-06 03:34:34.220934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.347 [2024-12-06 03:34:34.220943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.347 [2024-12-06 03:34:34.233214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.347 [2024-12-06 03:34:34.233679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.347 [2024-12-06 03:34:34.233723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.347 [2024-12-06 03:34:34.233747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.347 [2024-12-06 03:34:34.234180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.347 [2024-12-06 03:34:34.234346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.347 [2024-12-06 03:34:34.234356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.347 [2024-12-06 03:34:34.234362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.347 [2024-12-06 03:34:34.234369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.347 [2024-12-06 03:34:34.246389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.347 [2024-12-06 03:34:34.246834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.347 [2024-12-06 03:34:34.246853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.347 [2024-12-06 03:34:34.246861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.347 [2024-12-06 03:34:34.247047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.347 [2024-12-06 03:34:34.247228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.347 [2024-12-06 03:34:34.247238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.347 [2024-12-06 03:34:34.247245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.347 [2024-12-06 03:34:34.247252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.347 [2024-12-06 03:34:34.259446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.347 [2024-12-06 03:34:34.259879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.347 [2024-12-06 03:34:34.259925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.347 [2024-12-06 03:34:34.259967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.347 [2024-12-06 03:34:34.260554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.347 [2024-12-06 03:34:34.260766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.347 [2024-12-06 03:34:34.260775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.347 [2024-12-06 03:34:34.260782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.347 [2024-12-06 03:34:34.260788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.347 [2024-12-06 03:34:34.272570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.347 [2024-12-06 03:34:34.273007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.347 [2024-12-06 03:34:34.273055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.347 [2024-12-06 03:34:34.273079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.347 [2024-12-06 03:34:34.273419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.347 [2024-12-06 03:34:34.273678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.347 [2024-12-06 03:34:34.273691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.347 [2024-12-06 03:34:34.273702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.347 [2024-12-06 03:34:34.273712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.347 [2024-12-06 03:34:34.285735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.286157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.286175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.286183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.286352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.286522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.286532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.286538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.348 [2024-12-06 03:34:34.286545] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.348 [2024-12-06 03:34:34.298675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.299040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.299058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.299067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.299231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.299396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.299405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.299412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.348 [2024-12-06 03:34:34.299418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.348 [2024-12-06 03:34:34.311499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.311867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.311884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.311892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.312067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.312233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.312243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.312249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.348 [2024-12-06 03:34:34.312256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.348 [2024-12-06 03:34:34.324376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.324722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.324755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.324763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.324937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.325117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.325128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.325135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.348 [2024-12-06 03:34:34.325141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.348 [2024-12-06 03:34:34.337466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.337815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.337833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.337841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.338027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.338208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.338218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.338225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.348 [2024-12-06 03:34:34.338233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.348 [2024-12-06 03:34:34.350594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.351034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.351052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.351060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.351235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.351410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.351422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.351429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.348 [2024-12-06 03:34:34.351436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.348 [2024-12-06 03:34:34.363416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.363817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.363835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.363843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.364013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.364179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.364189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.364195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.348 [2024-12-06 03:34:34.364202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.348 [2024-12-06 03:34:34.376259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.376694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.376711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.376719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.376884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.377056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.377066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.377073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.348 [2024-12-06 03:34:34.377079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.348 [2024-12-06 03:34:34.389139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.389464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.389481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.389489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.389652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.389817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.389827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.389833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.348 [2024-12-06 03:34:34.389842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.348 [2024-12-06 03:34:34.402086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.348 [2024-12-06 03:34:34.402496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.348 [2024-12-06 03:34:34.402512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.348 [2024-12-06 03:34:34.402520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.348 [2024-12-06 03:34:34.402684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.348 [2024-12-06 03:34:34.402850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.348 [2024-12-06 03:34:34.402860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.348 [2024-12-06 03:34:34.402866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.349 [2024-12-06 03:34:34.402873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.349 [2024-12-06 03:34:34.414975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.349 [2024-12-06 03:34:34.415251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.349 [2024-12-06 03:34:34.415269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.349 [2024-12-06 03:34:34.415277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.349 [2024-12-06 03:34:34.415442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.349 [2024-12-06 03:34:34.415607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.349 [2024-12-06 03:34:34.415617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.349 [2024-12-06 03:34:34.415624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.349 [2024-12-06 03:34:34.415630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.349 [2024-12-06 03:34:34.427964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.349 [2024-12-06 03:34:34.428246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.349 [2024-12-06 03:34:34.428263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.349 [2024-12-06 03:34:34.428271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.349 [2024-12-06 03:34:34.428436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.349 [2024-12-06 03:34:34.428602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.349 [2024-12-06 03:34:34.428611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.349 [2024-12-06 03:34:34.428618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.349 [2024-12-06 03:34:34.428625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.349 [2024-12-06 03:34:34.440851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.349 [2024-12-06 03:34:34.441206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.349 [2024-12-06 03:34:34.441227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.349 [2024-12-06 03:34:34.441236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.349 [2024-12-06 03:34:34.441401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.349 [2024-12-06 03:34:34.441567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.349 [2024-12-06 03:34:34.441576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.349 [2024-12-06 03:34:34.441583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.349 [2024-12-06 03:34:34.441589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.349 [2024-12-06 03:34:34.453687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.349 [2024-12-06 03:34:34.454071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.349 [2024-12-06 03:34:34.454089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.349 [2024-12-06 03:34:34.454097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.349 [2024-12-06 03:34:34.454262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.349 [2024-12-06 03:34:34.454427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.349 [2024-12-06 03:34:34.454437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.349 [2024-12-06 03:34:34.454443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.349 [2024-12-06 03:34:34.454450] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.349 [2024-12-06 03:34:34.466775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.349 [2024-12-06 03:34:34.467119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.349 [2024-12-06 03:34:34.467136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.349 [2024-12-06 03:34:34.467144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.349 [2024-12-06 03:34:34.467318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.349 [2024-12-06 03:34:34.467493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.349 [2024-12-06 03:34:34.467502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.349 [2024-12-06 03:34:34.467509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.349 [2024-12-06 03:34:34.467515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.349 [2024-12-06 03:34:34.479935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.349 [2024-12-06 03:34:34.480250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.349 [2024-12-06 03:34:34.480296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.349 [2024-12-06 03:34:34.480321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.349 [2024-12-06 03:34:34.480878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.349 [2024-12-06 03:34:34.481065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.349 [2024-12-06 03:34:34.481076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.349 [2024-12-06 03:34:34.481083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.349 [2024-12-06 03:34:34.481090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.611 [2024-12-06 03:34:34.493061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.611 [2024-12-06 03:34:34.493361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.611 [2024-12-06 03:34:34.493380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.611 [2024-12-06 03:34:34.493388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.611 [2024-12-06 03:34:34.493566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.611 [2024-12-06 03:34:34.493747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.611 [2024-12-06 03:34:34.493757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.611 [2024-12-06 03:34:34.493763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.611 [2024-12-06 03:34:34.493770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.611 [2024-12-06 03:34:34.506236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.611 [2024-12-06 03:34:34.506646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.611 [2024-12-06 03:34:34.506664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.611 [2024-12-06 03:34:34.506673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.611 [2024-12-06 03:34:34.506857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.611 [2024-12-06 03:34:34.507052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.611 [2024-12-06 03:34:34.507063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.611 [2024-12-06 03:34:34.507071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.611 [2024-12-06 03:34:34.507080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.611 [2024-12-06 03:34:34.519462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.611 [2024-12-06 03:34:34.519834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.611 [2024-12-06 03:34:34.519853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.611 [2024-12-06 03:34:34.519861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.611 [2024-12-06 03:34:34.520052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.611 [2024-12-06 03:34:34.520240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.611 [2024-12-06 03:34:34.520253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.611 [2024-12-06 03:34:34.520261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.611 [2024-12-06 03:34:34.520269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.611 9194.33 IOPS, 35.92 MiB/s [2024-12-06T02:34:34.752Z] [2024-12-06 03:34:34.533968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.611 [2024-12-06 03:34:34.534392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.611 [2024-12-06 03:34:34.534412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.611 [2024-12-06 03:34:34.534420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.611 [2024-12-06 03:34:34.534605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.611 [2024-12-06 03:34:34.534792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.611 [2024-12-06 03:34:34.534802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.611 [2024-12-06 03:34:34.534809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.611 [2024-12-06 03:34:34.534817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.611 [2024-12-06 03:34:34.547283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.611 [2024-12-06 03:34:34.547724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.611 [2024-12-06 03:34:34.547742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.611 [2024-12-06 03:34:34.547750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.611 [2024-12-06 03:34:34.547934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.611 [2024-12-06 03:34:34.548127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.611 [2024-12-06 03:34:34.548138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.611 [2024-12-06 03:34:34.548145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.611 [2024-12-06 03:34:34.548153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.611 [2024-12-06 03:34:34.560507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.611 [2024-12-06 03:34:34.560931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.611 [2024-12-06 03:34:34.560956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.611 [2024-12-06 03:34:34.560964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.611 [2024-12-06 03:34:34.561150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.611 [2024-12-06 03:34:34.561335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.611 [2024-12-06 03:34:34.561345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.611 [2024-12-06 03:34:34.561352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.611 [2024-12-06 03:34:34.561362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.611 [2024-12-06 03:34:34.573815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.611 [2024-12-06 03:34:34.574198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.611 [2024-12-06 03:34:34.574217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.611 [2024-12-06 03:34:34.574226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.611 [2024-12-06 03:34:34.574410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.611 [2024-12-06 03:34:34.574596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.611 [2024-12-06 03:34:34.574606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.611 [2024-12-06 03:34:34.574613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.611 [2024-12-06 03:34:34.574620] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.587062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.587504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.587522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.587531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.587709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.587889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.587899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.587909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.612 [2024-12-06 03:34:34.587916] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.600243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.600543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.600560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.600568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.600747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.600927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.600937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.600944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.612 [2024-12-06 03:34:34.600959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.613110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.613479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.613533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.613557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.614070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.614245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.614255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.614262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.612 [2024-12-06 03:34:34.614269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.626058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.626456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.626473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.626481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.626646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.626812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.626822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.626828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.612 [2024-12-06 03:34:34.626835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.639031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.639456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.639473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.639481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.639645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.639810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.639820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.639826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.612 [2024-12-06 03:34:34.639832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.651866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.652300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.652345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.652369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.652975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.653368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.653377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.653384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.612 [2024-12-06 03:34:34.653390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.664861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.665243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.665261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.665269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.665434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.665601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.665611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.665618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.612 [2024-12-06 03:34:34.665624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.677705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.678152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.678198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.678222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.678808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.679343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.679354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.679360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.612 [2024-12-06 03:34:34.679367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.690673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.691105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.691123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.691131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.691295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.691461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.691473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.691480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.612 [2024-12-06 03:34:34.691486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.612 [2024-12-06 03:34:34.703665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.612 [2024-12-06 03:34:34.704093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.612 [2024-12-06 03:34:34.704111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.612 [2024-12-06 03:34:34.704118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.612 [2024-12-06 03:34:34.704282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.612 [2024-12-06 03:34:34.704447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.612 [2024-12-06 03:34:34.704457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.612 [2024-12-06 03:34:34.704463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.613 [2024-12-06 03:34:34.704470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.613 [2024-12-06 03:34:34.716652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.613 [2024-12-06 03:34:34.717009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.613 [2024-12-06 03:34:34.717027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.613 [2024-12-06 03:34:34.717035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.613 [2024-12-06 03:34:34.717199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.613 [2024-12-06 03:34:34.717365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.613 [2024-12-06 03:34:34.717374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.613 [2024-12-06 03:34:34.717380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.613 [2024-12-06 03:34:34.717387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.613 [2024-12-06 03:34:34.729532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.613 [2024-12-06 03:34:34.729959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.613 [2024-12-06 03:34:34.729976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.613 [2024-12-06 03:34:34.729984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.613 [2024-12-06 03:34:34.730147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.613 [2024-12-06 03:34:34.730312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.613 [2024-12-06 03:34:34.730322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.613 [2024-12-06 03:34:34.730328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.613 [2024-12-06 03:34:34.730334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.613 [2024-12-06 03:34:34.742483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.613 [2024-12-06 03:34:34.742926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.613 [2024-12-06 03:34:34.742945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.613 [2024-12-06 03:34:34.742958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.613 [2024-12-06 03:34:34.743138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.613 [2024-12-06 03:34:34.743319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.613 [2024-12-06 03:34:34.743329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.613 [2024-12-06 03:34:34.743336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.613 [2024-12-06 03:34:34.743342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.874 [2024-12-06 03:34:34.755663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.874 [2024-12-06 03:34:34.756088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.874 [2024-12-06 03:34:34.756106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.874 [2024-12-06 03:34:34.756114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.874 [2024-12-06 03:34:34.756278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.874 [2024-12-06 03:34:34.756443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.874 [2024-12-06 03:34:34.756453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.874 [2024-12-06 03:34:34.756459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.874 [2024-12-06 03:34:34.756465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.874 [2024-12-06 03:34:34.768534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.874 [2024-12-06 03:34:34.768964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.874 [2024-12-06 03:34:34.768981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.874 [2024-12-06 03:34:34.768988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.874 [2024-12-06 03:34:34.769153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.874 [2024-12-06 03:34:34.769317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.874 [2024-12-06 03:34:34.769326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.874 [2024-12-06 03:34:34.769332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.874 [2024-12-06 03:34:34.769339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.874 [2024-12-06 03:34:34.781512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.874 [2024-12-06 03:34:34.781931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.874 [2024-12-06 03:34:34.781956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.874 [2024-12-06 03:34:34.781965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.874 [2024-12-06 03:34:34.782140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.874 [2024-12-06 03:34:34.782314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.874 [2024-12-06 03:34:34.782324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.874 [2024-12-06 03:34:34.782331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.874 [2024-12-06 03:34:34.782337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.874 [2024-12-06 03:34:34.794404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.874 [2024-12-06 03:34:34.794835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.874 [2024-12-06 03:34:34.794883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.874 [2024-12-06 03:34:34.794908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.874 [2024-12-06 03:34:34.795439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.874 [2024-12-06 03:34:34.795615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.874 [2024-12-06 03:34:34.795625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.874 [2024-12-06 03:34:34.795632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.874 [2024-12-06 03:34:34.795639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.874 [2024-12-06 03:34:34.807273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.874 [2024-12-06 03:34:34.807697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.874 [2024-12-06 03:34:34.807753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.874 [2024-12-06 03:34:34.807777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.874 [2024-12-06 03:34:34.808326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.874 [2024-12-06 03:34:34.808584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.875 [2024-12-06 03:34:34.808597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.875 [2024-12-06 03:34:34.808607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.875 [2024-12-06 03:34:34.808617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.875 [2024-12-06 03:34:34.820944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.875 [2024-12-06 03:34:34.821368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.875 [2024-12-06 03:34:34.821386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.875 [2024-12-06 03:34:34.821394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.875 [2024-12-06 03:34:34.821566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.875 [2024-12-06 03:34:34.821735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.875 [2024-12-06 03:34:34.821744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.875 [2024-12-06 03:34:34.821751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.875 [2024-12-06 03:34:34.821758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.875 [2024-12-06 03:34:34.833837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.875 [2024-12-06 03:34:34.834283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.875 [2024-12-06 03:34:34.834301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.875 [2024-12-06 03:34:34.834309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.875 [2024-12-06 03:34:34.834483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.875 [2024-12-06 03:34:34.834658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.875 [2024-12-06 03:34:34.834684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.875 [2024-12-06 03:34:34.834691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.875 [2024-12-06 03:34:34.834699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.875 [2024-12-06 03:34:34.846968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.875 [2024-12-06 03:34:34.847401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.875 [2024-12-06 03:34:34.847419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.875 [2024-12-06 03:34:34.847428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.875 [2024-12-06 03:34:34.847607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.875 [2024-12-06 03:34:34.847787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.875 [2024-12-06 03:34:34.847797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.875 [2024-12-06 03:34:34.847804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.875 [2024-12-06 03:34:34.847810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.875 [2024-12-06 03:34:34.859900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.875 [2024-12-06 03:34:34.860326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.875 [2024-12-06 03:34:34.860343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.875 [2024-12-06 03:34:34.860351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.875 [2024-12-06 03:34:34.860516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.875 [2024-12-06 03:34:34.860680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.875 [2024-12-06 03:34:34.860689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.875 [2024-12-06 03:34:34.860699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.875 [2024-12-06 03:34:34.860705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.875 [2024-12-06 03:34:34.872878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.875 [2024-12-06 03:34:34.873244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.875 [2024-12-06 03:34:34.873261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.875 [2024-12-06 03:34:34.873268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.875 [2024-12-06 03:34:34.873433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.875 [2024-12-06 03:34:34.873599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.875 [2024-12-06 03:34:34.873609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.875 [2024-12-06 03:34:34.873615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.875 [2024-12-06 03:34:34.873621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.875 [2024-12-06 03:34:34.885711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.875 [2024-12-06 03:34:34.886130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.875 [2024-12-06 03:34:34.886147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.875 [2024-12-06 03:34:34.886154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.875 [2024-12-06 03:34:34.886320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.875 [2024-12-06 03:34:34.886484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.875 [2024-12-06 03:34:34.886494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.875 [2024-12-06 03:34:34.886500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.875 [2024-12-06 03:34:34.886507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.875 [2024-12-06 03:34:34.898667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.875 [2024-12-06 03:34:34.899017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.875 [2024-12-06 03:34:34.899035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.875 [2024-12-06 03:34:34.899043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.875 [2024-12-06 03:34:34.899209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.875 [2024-12-06 03:34:34.899374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.875 [2024-12-06 03:34:34.899383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.875 [2024-12-06 03:34:34.899390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.875 [2024-12-06 03:34:34.899396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.875 [2024-12-06 03:34:34.911491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.875 [2024-12-06 03:34:34.911900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.875 [2024-12-06 03:34:34.911917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.875 [2024-12-06 03:34:34.911925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.875 [2024-12-06 03:34:34.912118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.875 [2024-12-06 03:34:34.912293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.875 [2024-12-06 03:34:34.912303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.875 [2024-12-06 03:34:34.912310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.875 [2024-12-06 03:34:34.912317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.876 [2024-12-06 03:34:34.924397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.876 [2024-12-06 03:34:34.924753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.876 [2024-12-06 03:34:34.924770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.876 [2024-12-06 03:34:34.924777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.876 [2024-12-06 03:34:34.924941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.876 [2024-12-06 03:34:34.925135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.876 [2024-12-06 03:34:34.925145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.876 [2024-12-06 03:34:34.925152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.876 [2024-12-06 03:34:34.925159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.876 [2024-12-06 03:34:34.937345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.876 [2024-12-06 03:34:34.937751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.876 [2024-12-06 03:34:34.937768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.876 [2024-12-06 03:34:34.937776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.876 [2024-12-06 03:34:34.937941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.876 [2024-12-06 03:34:34.938135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.876 [2024-12-06 03:34:34.938146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.876 [2024-12-06 03:34:34.938152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.876 [2024-12-06 03:34:34.938159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.876 [2024-12-06 03:34:34.950211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.876 [2024-12-06 03:34:34.950578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.876 [2024-12-06 03:34:34.950624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.876 [2024-12-06 03:34:34.950654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.876 [2024-12-06 03:34:34.951256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.876 [2024-12-06 03:34:34.951451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.876 [2024-12-06 03:34:34.951462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.876 [2024-12-06 03:34:34.951468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.876 [2024-12-06 03:34:34.951475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.876 [2024-12-06 03:34:34.963049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.876 [2024-12-06 03:34:34.963472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.876 [2024-12-06 03:34:34.963489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.876 [2024-12-06 03:34:34.963496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.876 [2024-12-06 03:34:34.963660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.876 [2024-12-06 03:34:34.963825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.876 [2024-12-06 03:34:34.963834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.876 [2024-12-06 03:34:34.963840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.876 [2024-12-06 03:34:34.963847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.876 [2024-12-06 03:34:34.975995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.876 [2024-12-06 03:34:34.976421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.876 [2024-12-06 03:34:34.976438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.876 [2024-12-06 03:34:34.976445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.876 [2024-12-06 03:34:34.976609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.876 [2024-12-06 03:34:34.976775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.876 [2024-12-06 03:34:34.976785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.876 [2024-12-06 03:34:34.976791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.876 [2024-12-06 03:34:34.976798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.876 [2024-12-06 03:34:34.988873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.876 [2024-12-06 03:34:34.989293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.876 [2024-12-06 03:34:34.989310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.876 [2024-12-06 03:34:34.989318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.876 [2024-12-06 03:34:34.989482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.876 [2024-12-06 03:34:34.989651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.876 [2024-12-06 03:34:34.989660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.876 [2024-12-06 03:34:34.989666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.876 [2024-12-06 03:34:34.989673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.876 [2024-12-06 03:34:35.001718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.876 [2024-12-06 03:34:35.002075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.876 [2024-12-06 03:34:35.002092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:14.876 [2024-12-06 03:34:35.002100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:14.876 [2024-12-06 03:34:35.002264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:14.876 [2024-12-06 03:34:35.002429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.876 [2024-12-06 03:34:35.002438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.876 [2024-12-06 03:34:35.002444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.876 [2024-12-06 03:34:35.002451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:15.135 [2024-12-06 03:34:35.014737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:15.135 [2024-12-06 03:34:35.015170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:15.135 [2024-12-06 03:34:35.015215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:15.135 [2024-12-06 03:34:35.015239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:15.135 [2024-12-06 03:34:35.015823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:15.135 [2024-12-06 03:34:35.016023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:15.135 [2024-12-06 03:34:35.016033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:15.135 [2024-12-06 03:34:35.016041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:15.135 [2024-12-06 03:34:35.016048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:15.135 [2024-12-06 03:34:35.027650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:15.135 [2024-12-06 03:34:35.028093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:15.135 [2024-12-06 03:34:35.028139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:15.135 [2024-12-06 03:34:35.028163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:15.135 [2024-12-06 03:34:35.028749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:15.135 [2024-12-06 03:34:35.029229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:15.135 [2024-12-06 03:34:35.029239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:15.135 [2024-12-06 03:34:35.029249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:15.135 [2024-12-06 03:34:35.029256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:15.135 [2024-12-06 03:34:35.040530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.135 [2024-12-06 03:34:35.040959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.135 [2024-12-06 03:34:35.040977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.135 [2024-12-06 03:34:35.040984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.135 [2024-12-06 03:34:35.041148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.135 [2024-12-06 03:34:35.041314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.135 [2024-12-06 03:34:35.041323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.135 [2024-12-06 03:34:35.041329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.135 [2024-12-06 03:34:35.041335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.135 [2024-12-06 03:34:35.053413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.053838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.053854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.053862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.054048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.054223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.054233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.054240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.054246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.066361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.066787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.066845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.066869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.067468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.068067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.068095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.068118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.068150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.079300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.079754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.079799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.079822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.080314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.080490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.080500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.080508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.080514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.092166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.092615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.092634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.092642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.092816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.093013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.093023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.093030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.093038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.105320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.105764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.105782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.105790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.105975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.106155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.106165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.106172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.106179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.118472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.118852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.118897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.118934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.119392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.119567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.119577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.119584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.119590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.131379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.131803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.131819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.131827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.131997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.132189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.132198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.132205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.132211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.144251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.144655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.144672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.144679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.144842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.145013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.145023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.145030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.145036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.157077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.157511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.157555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.157579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.158114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.158293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.158303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.158310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.158317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.169896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.170305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.170323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.170330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.170496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.170662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.170671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.170677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.170683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.182775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.183209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.183228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.183235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.183399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.183565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.183575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.183581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.183588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.195723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.196136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.196181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.196206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.196655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.196821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.196830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.196840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.196846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.208640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.209046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.209063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.209072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.209237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.209401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.209411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.209417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.209424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.221508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.221958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.222004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.222029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.222529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.222705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.222714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.222721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.222728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.235008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.235452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.235497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.235521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.236122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.236394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.236404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.236410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.236418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.248073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.248428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.248447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.248454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.248619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.248784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.248794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.248801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.248807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.136 [2024-12-06 03:34:35.260888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.136 [2024-12-06 03:34:35.261315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.136 [2024-12-06 03:34:35.261370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.136 [2024-12-06 03:34:35.261394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.136 [2024-12-06 03:34:35.261994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.136 [2024-12-06 03:34:35.262584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.136 [2024-12-06 03:34:35.262623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.136 [2024-12-06 03:34:35.262630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.136 [2024-12-06 03:34:35.262638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.399 [2024-12-06 03:34:35.274102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.399 [2024-12-06 03:34:35.274548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.399 [2024-12-06 03:34:35.274593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.399 [2024-12-06 03:34:35.274617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.399 [2024-12-06 03:34:35.275216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.399 [2024-12-06 03:34:35.275767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.399 [2024-12-06 03:34:35.275777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.399 [2024-12-06 03:34:35.275783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.399 [2024-12-06 03:34:35.275791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.399 [2024-12-06 03:34:35.287027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.399 [2024-12-06 03:34:35.287440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.399 [2024-12-06 03:34:35.287457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.399 [2024-12-06 03:34:35.287468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.399 [2024-12-06 03:34:35.287632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.399 [2024-12-06 03:34:35.287797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.399 [2024-12-06 03:34:35.287807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.399 [2024-12-06 03:34:35.287813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.399 [2024-12-06 03:34:35.287819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.399 [2024-12-06 03:34:35.299904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.399 [2024-12-06 03:34:35.300263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.399 [2024-12-06 03:34:35.300280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.399 [2024-12-06 03:34:35.300288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.399 [2024-12-06 03:34:35.300451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.399 [2024-12-06 03:34:35.300616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.399 [2024-12-06 03:34:35.300625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.399 [2024-12-06 03:34:35.300631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.399 [2024-12-06 03:34:35.300638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.399 [2024-12-06 03:34:35.312747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.399 [2024-12-06 03:34:35.313193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.399 [2024-12-06 03:34:35.313240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.399 [2024-12-06 03:34:35.313265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.399 [2024-12-06 03:34:35.313799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.399 [2024-12-06 03:34:35.313999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.399 [2024-12-06 03:34:35.314012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.399 [2024-12-06 03:34:35.314023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.399 [2024-12-06 03:34:35.314032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.399 [2024-12-06 03:34:35.326341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.399 [2024-12-06 03:34:35.326697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.399 [2024-12-06 03:34:35.326741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.399 [2024-12-06 03:34:35.326765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.399 [2024-12-06 03:34:35.327365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.399 [2024-12-06 03:34:35.327874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.399 [2024-12-06 03:34:35.327883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.399 [2024-12-06 03:34:35.327890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.399 [2024-12-06 03:34:35.327897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.399 [2024-12-06 03:34:35.339205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.399 [2024-12-06 03:34:35.339591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.399 [2024-12-06 03:34:35.339636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.399 [2024-12-06 03:34:35.339659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.399 [2024-12-06 03:34:35.340260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.399 [2024-12-06 03:34:35.340809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.399 [2024-12-06 03:34:35.340818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.399 [2024-12-06 03:34:35.340825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.399 [2024-12-06 03:34:35.340831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.399 [2024-12-06 03:34:35.352093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.399 [2024-12-06 03:34:35.352533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.399 [2024-12-06 03:34:35.352551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.399 [2024-12-06 03:34:35.352558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.399 [2024-12-06 03:34:35.352732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.399 [2024-12-06 03:34:35.352922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.399 [2024-12-06 03:34:35.352931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.399 [2024-12-06 03:34:35.352938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.399 [2024-12-06 03:34:35.352945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.399 [2024-12-06 03:34:35.365197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.399 [2024-12-06 03:34:35.365546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.399 [2024-12-06 03:34:35.365563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.399 [2024-12-06 03:34:35.365572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.399 [2024-12-06 03:34:35.365751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.399 [2024-12-06 03:34:35.365931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.399 [2024-12-06 03:34:35.365941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.365958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.365965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.378077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.378428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.378472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.378496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.379020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.379187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.379197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.379203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.379209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.390931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.391358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.391376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.391383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.391547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.391713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.391723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.391730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.391736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.403805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.404240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.404291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.404315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.404898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.405367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.405377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.405384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.405391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.416750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.417170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.417215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.417239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.417799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.417970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.417996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.418003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.418010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.429607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.430028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.430046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.430053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.430218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.430384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.430393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.430399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.430406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.442520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.442847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.442865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.442873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.443062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.443237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.443247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.443253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.443260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.455405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.455767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.455811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.455842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.456330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.456506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.456516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.456522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.456529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.468239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.468667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.468712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.468736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.469235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.469411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.469420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.469427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.469433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.481099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.481521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.481538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.481545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.481710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.481875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.481884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.481891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.481897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.493929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.494355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.494372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.494381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.494554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.494732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.494742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.494749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.494755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.506955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.507361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.400 [2024-12-06 03:34:35.507378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.400 [2024-12-06 03:34:35.507386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.400 [2024-12-06 03:34:35.507550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.400 [2024-12-06 03:34:35.507716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.400 [2024-12-06 03:34:35.507725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.400 [2024-12-06 03:34:35.507732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.400 [2024-12-06 03:34:35.507738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.400 [2024-12-06 03:34:35.520043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.400 [2024-12-06 03:34:35.520392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.401 [2024-12-06 03:34:35.520437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.401 [2024-12-06 03:34:35.520462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.401 [2024-12-06 03:34:35.521061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.401 [2024-12-06 03:34:35.521500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.401 [2024-12-06 03:34:35.521510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.401 [2024-12-06 03:34:35.521518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.401 [2024-12-06 03:34:35.521525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.401 [2024-12-06 03:34:35.533192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.401 [2024-12-06 03:34:35.533635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.401 [2024-12-06 03:34:35.533653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.401 [2024-12-06 03:34:35.533662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.401 [2024-12-06 03:34:35.533841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.401 [2024-12-06 03:34:35.534026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.401 [2024-12-06 03:34:35.534036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.401 [2024-12-06 03:34:35.534042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.401 [2024-12-06 03:34:35.534052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.665 6895.75 IOPS, 26.94 MiB/s [2024-12-06T02:34:35.806Z] [2024-12-06 03:34:35.546241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.665 [2024-12-06 03:34:35.546589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.665 [2024-12-06 03:34:35.546607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.665 [2024-12-06 03:34:35.546615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.665 [2024-12-06 03:34:35.546788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.665 [2024-12-06 03:34:35.546968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.665 [2024-12-06 03:34:35.546994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.665 [2024-12-06 03:34:35.547001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.665 [2024-12-06 03:34:35.547008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.665 [2024-12-06 03:34:35.559054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.665 [2024-12-06 03:34:35.559485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.665 [2024-12-06 03:34:35.559529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.665 [2024-12-06 03:34:35.559553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.665 [2024-12-06 03:34:35.560152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.665 [2024-12-06 03:34:35.560552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.665 [2024-12-06 03:34:35.560561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.665 [2024-12-06 03:34:35.560568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.665 [2024-12-06 03:34:35.560575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.665 [2024-12-06 03:34:35.571877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.665 [2024-12-06 03:34:35.572307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.665 [2024-12-06 03:34:35.572325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.665 [2024-12-06 03:34:35.572333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.665 [2024-12-06 03:34:35.572507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.665 [2024-12-06 03:34:35.572681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.665 [2024-12-06 03:34:35.572691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.665 [2024-12-06 03:34:35.572697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.665 [2024-12-06 03:34:35.572704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.665 [2024-12-06 03:34:35.584822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.665 [2024-12-06 03:34:35.585225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.665 [2024-12-06 03:34:35.585242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.665 [2024-12-06 03:34:35.585249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.665 [2024-12-06 03:34:35.585413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.665 [2024-12-06 03:34:35.585578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.665 [2024-12-06 03:34:35.585587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.665 [2024-12-06 03:34:35.585593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.665 [2024-12-06 03:34:35.585599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.665 [2024-12-06 03:34:35.597744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.665 [2024-12-06 03:34:35.598160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.665 [2024-12-06 03:34:35.598178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.665 [2024-12-06 03:34:35.598187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.665 [2024-12-06 03:34:35.598365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.665 [2024-12-06 03:34:35.598530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.665 [2024-12-06 03:34:35.598539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.665 [2024-12-06 03:34:35.598545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.665 [2024-12-06 03:34:35.598552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.665 [2024-12-06 03:34:35.610749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.665 [2024-12-06 03:34:35.611196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.665 [2024-12-06 03:34:35.611214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.665 [2024-12-06 03:34:35.611223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.665 [2024-12-06 03:34:35.611397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.665 [2024-12-06 03:34:35.611571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.665 [2024-12-06 03:34:35.611597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.665 [2024-12-06 03:34:35.611604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.665 [2024-12-06 03:34:35.611612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.665 [2024-12-06 03:34:35.623881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.665 [2024-12-06 03:34:35.624305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.665 [2024-12-06 03:34:35.624324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.665 [2024-12-06 03:34:35.624335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.665 [2024-12-06 03:34:35.624514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.665 [2024-12-06 03:34:35.624695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.665 [2024-12-06 03:34:35.624706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.665 [2024-12-06 03:34:35.624712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.665 [2024-12-06 03:34:35.624719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.665 [2024-12-06 03:34:35.636934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.637336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.637353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.637362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.637536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.637712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.637722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.666 [2024-12-06 03:34:35.637729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.666 [2024-12-06 03:34:35.637735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.666 [2024-12-06 03:34:35.650118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.650495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.650513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.650521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.650700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.650881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.650891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.666 [2024-12-06 03:34:35.650899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.666 [2024-12-06 03:34:35.650906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.666 [2024-12-06 03:34:35.663197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.663542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.663560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.663568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.663742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.663917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.663930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.666 [2024-12-06 03:34:35.663939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.666 [2024-12-06 03:34:35.663946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.666 [2024-12-06 03:34:35.676288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.676694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.676712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.676720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.676895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.677078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.677089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.666 [2024-12-06 03:34:35.677096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.666 [2024-12-06 03:34:35.677103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.666 [2024-12-06 03:34:35.689397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.689820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.689864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.689888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.690462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.690638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.690649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.666 [2024-12-06 03:34:35.690656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.666 [2024-12-06 03:34:35.690663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.666 [2024-12-06 03:34:35.702429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.702851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.702868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.702875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.703046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.703212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.703221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.666 [2024-12-06 03:34:35.703228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.666 [2024-12-06 03:34:35.703237] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.666 [2024-12-06 03:34:35.715446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.715805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.715822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.715830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.716019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.716194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.716204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.666 [2024-12-06 03:34:35.716211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.666 [2024-12-06 03:34:35.716218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.666 [2024-12-06 03:34:35.728396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.728750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.728768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.728775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.728939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.729136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.729146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.666 [2024-12-06 03:34:35.729153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.666 [2024-12-06 03:34:35.729160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.666 [2024-12-06 03:34:35.741379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.741754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.741799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.741823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.742426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.742901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.742911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.666 [2024-12-06 03:34:35.742917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.666 [2024-12-06 03:34:35.742923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.666 [2024-12-06 03:34:35.754337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.666 [2024-12-06 03:34:35.754685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.666 [2024-12-06 03:34:35.754702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.666 [2024-12-06 03:34:35.754710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.666 [2024-12-06 03:34:35.754884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.666 [2024-12-06 03:34:35.755064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.666 [2024-12-06 03:34:35.755075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.667 [2024-12-06 03:34:35.755082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.667 [2024-12-06 03:34:35.755088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.667 [2024-12-06 03:34:35.767411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.667 [2024-12-06 03:34:35.767765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.667 [2024-12-06 03:34:35.767782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.667 [2024-12-06 03:34:35.767789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.667 [2024-12-06 03:34:35.767962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.667 [2024-12-06 03:34:35.768128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.667 [2024-12-06 03:34:35.768138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.667 [2024-12-06 03:34:35.768145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.667 [2024-12-06 03:34:35.768151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.667 [2024-12-06 03:34:35.780361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.667 [2024-12-06 03:34:35.780771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.667 [2024-12-06 03:34:35.780788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.667 [2024-12-06 03:34:35.780795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.667 [2024-12-06 03:34:35.780965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.667 [2024-12-06 03:34:35.781132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.667 [2024-12-06 03:34:35.781141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.667 [2024-12-06 03:34:35.781148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.667 [2024-12-06 03:34:35.781154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.667 [2024-12-06 03:34:35.793451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.667 [2024-12-06 03:34:35.793854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.667 [2024-12-06 03:34:35.793871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.667 [2024-12-06 03:34:35.793879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.667 [2024-12-06 03:34:35.794072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.667 [2024-12-06 03:34:35.794248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.667 [2024-12-06 03:34:35.794257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.667 [2024-12-06 03:34:35.794264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.667 [2024-12-06 03:34:35.794271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.926 [2024-12-06 03:34:35.806473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.926 [2024-12-06 03:34:35.806914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.926 [2024-12-06 03:34:35.806972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.926 [2024-12-06 03:34:35.806998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.926 [2024-12-06 03:34:35.807583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.808069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.808080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.808087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.808094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.819382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.819809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.819853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.819877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.820475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.821012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.821023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.821030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.821038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.832362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.832696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.832712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.832719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.832885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.833057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.833070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.833077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.833084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.845339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.845759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.845803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.845826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.846423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.847025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.847053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.847074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.847093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.858405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.858761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.858805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.858829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.859273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.859440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.859450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.859456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.859463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.871467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.871904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.871921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.871930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.872130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.872311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.872321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.872328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.872338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.884651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.885029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.885048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.885057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.885237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.885418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.885427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.885434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.885442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.897757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.898119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.898170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.898194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.898777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.899015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.899025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.899032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.899039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.910683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.910965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.910999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.911008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.911181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.911355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.911365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.911371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.911378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.923548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.923886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.923905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.923913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.924107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.924282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.924292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.924299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.924305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.936527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.936943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.936999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.937024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.927 [2024-12-06 03:34:35.937608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.927 [2024-12-06 03:34:35.937813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.927 [2024-12-06 03:34:35.937822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.927 [2024-12-06 03:34:35.937829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.927 [2024-12-06 03:34:35.937835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.927 [2024-12-06 03:34:35.949472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.927 [2024-12-06 03:34:35.949889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.927 [2024-12-06 03:34:35.949933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.927 [2024-12-06 03:34:35.949972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.928 [2024-12-06 03:34:35.950398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.928 [2024-12-06 03:34:35.950566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.928 [2024-12-06 03:34:35.950575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.928 [2024-12-06 03:34:35.950582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.928 [2024-12-06 03:34:35.950588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.928 [2024-12-06 03:34:35.962486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.928 [2024-12-06 03:34:35.962893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.928 [2024-12-06 03:34:35.962910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.928 [2024-12-06 03:34:35.962917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.928 [2024-12-06 03:34:35.963103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.928 [2024-12-06 03:34:35.963285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.928 [2024-12-06 03:34:35.963294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.928 [2024-12-06 03:34:35.963301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.928 [2024-12-06 03:34:35.963307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.928 [2024-12-06 03:34:35.975343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.928 [2024-12-06 03:34:35.975790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.928 [2024-12-06 03:34:35.975808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.928 [2024-12-06 03:34:35.975816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.928 [2024-12-06 03:34:35.975995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.928 [2024-12-06 03:34:35.976170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.928 [2024-12-06 03:34:35.976180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.928 [2024-12-06 03:34:35.976187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.928 [2024-12-06 03:34:35.976193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.928 [2024-12-06 03:34:35.988244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.928 [2024-12-06 03:34:35.988668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.928 [2024-12-06 03:34:35.988685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.928 [2024-12-06 03:34:35.988693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.928 [2024-12-06 03:34:35.988867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.928 [2024-12-06 03:34:35.989048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.928 [2024-12-06 03:34:35.989059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.928 [2024-12-06 03:34:35.989066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.928 [2024-12-06 03:34:35.989073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.928 [2024-12-06 03:34:36.001283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.928 [2024-12-06 03:34:36.001644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.928 [2024-12-06 03:34:36.001661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.928 [2024-12-06 03:34:36.001669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.928 [2024-12-06 03:34:36.001842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.928 [2024-12-06 03:34:36.002024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.928 [2024-12-06 03:34:36.002037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.928 [2024-12-06 03:34:36.002044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.928 [2024-12-06 03:34:36.002051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.928 [2024-12-06 03:34:36.014167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.928 [2024-12-06 03:34:36.014665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.928 [2024-12-06 03:34:36.014682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.928 [2024-12-06 03:34:36.014690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.928 [2024-12-06 03:34:36.014863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.928 [2024-12-06 03:34:36.015045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.928 [2024-12-06 03:34:36.015056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.928 [2024-12-06 03:34:36.015063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.928 [2024-12-06 03:34:36.015069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.928 [2024-12-06 03:34:36.027102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.928 [2024-12-06 03:34:36.027513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.928 [2024-12-06 03:34:36.027559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.928 [2024-12-06 03:34:36.027583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.928 [2024-12-06 03:34:36.028188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.928 [2024-12-06 03:34:36.028367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.928 [2024-12-06 03:34:36.028377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.928 [2024-12-06 03:34:36.028383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.928 [2024-12-06 03:34:36.028389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.928 [2024-12-06 03:34:36.040156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.928 [2024-12-06 03:34:36.040580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.928 [2024-12-06 03:34:36.040624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.928 [2024-12-06 03:34:36.040648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.928 [2024-12-06 03:34:36.041244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.928 [2024-12-06 03:34:36.041437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.928 [2024-12-06 03:34:36.041447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.928 [2024-12-06 03:34:36.041453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.928 [2024-12-06 03:34:36.041463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.928 [2024-12-06 03:34:36.053792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.928 [2024-12-06 03:34:36.054161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.928 [2024-12-06 03:34:36.054178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:15.928 [2024-12-06 03:34:36.054185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:15.928 [2024-12-06 03:34:36.054354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:15.928 [2024-12-06 03:34:36.054524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.928 [2024-12-06 03:34:36.054533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.928 [2024-12-06 03:34:36.054540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.928 [2024-12-06 03:34:36.054546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.189 [2024-12-06 03:34:36.066791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.189 [2024-12-06 03:34:36.067231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.189 [2024-12-06 03:34:36.067276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.189 [2024-12-06 03:34:36.067300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.189 [2024-12-06 03:34:36.067719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.189 [2024-12-06 03:34:36.067912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.189 [2024-12-06 03:34:36.067923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.189 [2024-12-06 03:34:36.067930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.189 [2024-12-06 03:34:36.067937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.189 [2024-12-06 03:34:36.079660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.189 [2024-12-06 03:34:36.080089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.189 [2024-12-06 03:34:36.080106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.189 [2024-12-06 03:34:36.080113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.189 [2024-12-06 03:34:36.080278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.189 [2024-12-06 03:34:36.080443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.189 [2024-12-06 03:34:36.080460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.189 [2024-12-06 03:34:36.080466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.189 [2024-12-06 03:34:36.080473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.189 [2024-12-06 03:34:36.092551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.189 [2024-12-06 03:34:36.092975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.189 [2024-12-06 03:34:36.092996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.189 [2024-12-06 03:34:36.093005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.189 [2024-12-06 03:34:36.093169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.189 [2024-12-06 03:34:36.093334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.189 [2024-12-06 03:34:36.093343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.189 [2024-12-06 03:34:36.093350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.189 [2024-12-06 03:34:36.093356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.189 [2024-12-06 03:34:36.105430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.189 [2024-12-06 03:34:36.105855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.189 [2024-12-06 03:34:36.105899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.189 [2024-12-06 03:34:36.105923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.189 [2024-12-06 03:34:36.106524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.189 [2024-12-06 03:34:36.107093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.189 [2024-12-06 03:34:36.107103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.189 [2024-12-06 03:34:36.107110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.189 [2024-12-06 03:34:36.107117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.189 [2024-12-06 03:34:36.118511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.189 [2024-12-06 03:34:36.118895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.189 [2024-12-06 03:34:36.118941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.189 [2024-12-06 03:34:36.118984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.189 [2024-12-06 03:34:36.119471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.189 [2024-12-06 03:34:36.119646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.189 [2024-12-06 03:34:36.119654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.189 [2024-12-06 03:34:36.119661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.189 [2024-12-06 03:34:36.119667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.189 [2024-12-06 03:34:36.131457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.189 [2024-12-06 03:34:36.131828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.189 [2024-12-06 03:34:36.131846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.189 [2024-12-06 03:34:36.131854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.189 [2024-12-06 03:34:36.132055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.189 [2024-12-06 03:34:36.132235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.189 [2024-12-06 03:34:36.132245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.189 [2024-12-06 03:34:36.132252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.189 [2024-12-06 03:34:36.132259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.189 [2024-12-06 03:34:36.144547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.144993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.145037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.190 [2024-12-06 03:34:36.145064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.190 [2024-12-06 03:34:36.145651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.190 [2024-12-06 03:34:36.146209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.190 [2024-12-06 03:34:36.146219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.190 [2024-12-06 03:34:36.146226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.190 [2024-12-06 03:34:36.146233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.190 [2024-12-06 03:34:36.157414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.157768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.157786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.190 [2024-12-06 03:34:36.157793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.190 [2024-12-06 03:34:36.157965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.190 [2024-12-06 03:34:36.158155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.190 [2024-12-06 03:34:36.158164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.190 [2024-12-06 03:34:36.158171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.190 [2024-12-06 03:34:36.158178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.190 [2024-12-06 03:34:36.170357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.170783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.170799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.190 [2024-12-06 03:34:36.170807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.190 [2024-12-06 03:34:36.170978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.190 [2024-12-06 03:34:36.171170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.190 [2024-12-06 03:34:36.171180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.190 [2024-12-06 03:34:36.171190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.190 [2024-12-06 03:34:36.171196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.190 [2024-12-06 03:34:36.183309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.183731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.183748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.190 [2024-12-06 03:34:36.183755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.190 [2024-12-06 03:34:36.183919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.190 [2024-12-06 03:34:36.184112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.190 [2024-12-06 03:34:36.184123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.190 [2024-12-06 03:34:36.184130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.190 [2024-12-06 03:34:36.184136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.190 [2024-12-06 03:34:36.196234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.196595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.196638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.190 [2024-12-06 03:34:36.196662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.190 [2024-12-06 03:34:36.197159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.190 [2024-12-06 03:34:36.197335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.190 [2024-12-06 03:34:36.197345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.190 [2024-12-06 03:34:36.197352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.190 [2024-12-06 03:34:36.197358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.190 [2024-12-06 03:34:36.209087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.209452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.209496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.190 [2024-12-06 03:34:36.209521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.190 [2024-12-06 03:34:36.210026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.190 [2024-12-06 03:34:36.210203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.190 [2024-12-06 03:34:36.210213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.190 [2024-12-06 03:34:36.210220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.190 [2024-12-06 03:34:36.210227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.190 [2024-12-06 03:34:36.222164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.222594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.222640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.190 [2024-12-06 03:34:36.222664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.190 [2024-12-06 03:34:36.223108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.190 [2024-12-06 03:34:36.223283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.190 [2024-12-06 03:34:36.223293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.190 [2024-12-06 03:34:36.223300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.190 [2024-12-06 03:34:36.223307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.190 [2024-12-06 03:34:36.235030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.235381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.235398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.190 [2024-12-06 03:34:36.235405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.190 [2024-12-06 03:34:36.235569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.190 [2024-12-06 03:34:36.235735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.190 [2024-12-06 03:34:36.235744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.190 [2024-12-06 03:34:36.235751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.190 [2024-12-06 03:34:36.235757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.190 [2024-12-06 03:34:36.247937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.248361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.248379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.190 [2024-12-06 03:34:36.248387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.190 [2024-12-06 03:34:36.248551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.190 [2024-12-06 03:34:36.248716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.190 [2024-12-06 03:34:36.248726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.190 [2024-12-06 03:34:36.248732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.190 [2024-12-06 03:34:36.248739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.190 [2024-12-06 03:34:36.260817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.190 [2024-12-06 03:34:36.261244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.190 [2024-12-06 03:34:36.261264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.191 [2024-12-06 03:34:36.261273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.191 [2024-12-06 03:34:36.261439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.191 [2024-12-06 03:34:36.261604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.191 [2024-12-06 03:34:36.261613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.191 [2024-12-06 03:34:36.261620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.191 [2024-12-06 03:34:36.261626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.191 [2024-12-06 03:34:36.273837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.191 [2024-12-06 03:34:36.274253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.191 [2024-12-06 03:34:36.274271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.191 [2024-12-06 03:34:36.274278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.191 [2024-12-06 03:34:36.274442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.191 [2024-12-06 03:34:36.274607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.191 [2024-12-06 03:34:36.274617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.191 [2024-12-06 03:34:36.274623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.191 [2024-12-06 03:34:36.274630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.191 [2024-12-06 03:34:36.286787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.191 [2024-12-06 03:34:36.287210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.191 [2024-12-06 03:34:36.287227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.191 [2024-12-06 03:34:36.287235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.191 [2024-12-06 03:34:36.287399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.191 [2024-12-06 03:34:36.287563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.191 [2024-12-06 03:34:36.287573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.191 [2024-12-06 03:34:36.287579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.191 [2024-12-06 03:34:36.287585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.191 [2024-12-06 03:34:36.299646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.191 [2024-12-06 03:34:36.300070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.191 [2024-12-06 03:34:36.300125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.191 [2024-12-06 03:34:36.300150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.191 [2024-12-06 03:34:36.300734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.191 [2024-12-06 03:34:36.301219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.191 [2024-12-06 03:34:36.301229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.191 [2024-12-06 03:34:36.301236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.191 [2024-12-06 03:34:36.301244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.191 [2024-12-06 03:34:36.312510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.191 [2024-12-06 03:34:36.312939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.191 [2024-12-06 03:34:36.312961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.191 [2024-12-06 03:34:36.312969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.191 [2024-12-06 03:34:36.313133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.191 [2024-12-06 03:34:36.313298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.191 [2024-12-06 03:34:36.313307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.191 [2024-12-06 03:34:36.313314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.191 [2024-12-06 03:34:36.313320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.191 [2024-12-06 03:34:36.325670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.452 [2024-12-06 03:34:36.326081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.452 [2024-12-06 03:34:36.326098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.452 [2024-12-06 03:34:36.326106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.452 [2024-12-06 03:34:36.326285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.452 [2024-12-06 03:34:36.326465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.452 [2024-12-06 03:34:36.326475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.452 [2024-12-06 03:34:36.326482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.452 [2024-12-06 03:34:36.326489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.452 [2024-12-06 03:34:36.338718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.452 [2024-12-06 03:34:36.339162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.452 [2024-12-06 03:34:36.339209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.452 [2024-12-06 03:34:36.339234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.452 [2024-12-06 03:34:36.339818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.452 [2024-12-06 03:34:36.340417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.452 [2024-12-06 03:34:36.340445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.452 [2024-12-06 03:34:36.340475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.452 [2024-12-06 03:34:36.340496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.452 [2024-12-06 03:34:36.351588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.452 [2024-12-06 03:34:36.352011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.452 [2024-12-06 03:34:36.352029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.452 [2024-12-06 03:34:36.352037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.452 [2024-12-06 03:34:36.352201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.452 [2024-12-06 03:34:36.352367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.452 [2024-12-06 03:34:36.352376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.452 [2024-12-06 03:34:36.352383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.452 [2024-12-06 03:34:36.352389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.452 [2024-12-06 03:34:36.364462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.452 [2024-12-06 03:34:36.364900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.452 [2024-12-06 03:34:36.364945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.452 [2024-12-06 03:34:36.364986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.452 [2024-12-06 03:34:36.365476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.452 [2024-12-06 03:34:36.365732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.365745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.365755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.365763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.377793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.378224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.378242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.378249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.378418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.453 [2024-12-06 03:34:36.378589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.378598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.378605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.378611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.390667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.391110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.391129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.391138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.391317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.453 [2024-12-06 03:34:36.391498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.391508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.391515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.391521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.403757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.404195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.404240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.404264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.404848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.453 [2024-12-06 03:34:36.405464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.405474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.405481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.405487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.416735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.417159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.417176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.417184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.417348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.453 [2024-12-06 03:34:36.417514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.417523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.417529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.417536] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.429711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.430127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.430144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.430154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.430319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.453 [2024-12-06 03:34:36.430484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.430494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.430500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.430507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.442643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.443065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.443082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.443090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.443254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.453 [2024-12-06 03:34:36.443418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.443428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.443434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.443441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.455498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.455852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.455868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.455875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.456065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.453 [2024-12-06 03:34:36.456240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.456250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.456257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.456264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.468457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.468879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.468897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.468904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.469097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.453 [2024-12-06 03:34:36.469275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.469285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.469291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.469298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.481481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.481900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.481942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.481983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.482568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.453 [2024-12-06 03:34:36.483059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.453 [2024-12-06 03:34:36.483069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.453 [2024-12-06 03:34:36.483075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.453 [2024-12-06 03:34:36.483081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.453 [2024-12-06 03:34:36.494515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.453 [2024-12-06 03:34:36.494934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.453 [2024-12-06 03:34:36.494956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.453 [2024-12-06 03:34:36.494964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.453 [2024-12-06 03:34:36.495129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.454 [2024-12-06 03:34:36.495295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.454 [2024-12-06 03:34:36.495304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.454 [2024-12-06 03:34:36.495310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.454 [2024-12-06 03:34:36.495316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.454 [2024-12-06 03:34:36.507443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.454 [2024-12-06 03:34:36.507845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.454 [2024-12-06 03:34:36.507889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.454 [2024-12-06 03:34:36.507913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.454 [2024-12-06 03:34:36.508515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.454 [2024-12-06 03:34:36.508931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.454 [2024-12-06 03:34:36.508941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.454 [2024-12-06 03:34:36.508956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.454 [2024-12-06 03:34:36.508963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.454 [2024-12-06 03:34:36.520442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.454 [2024-12-06 03:34:36.520835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.454 [2024-12-06 03:34:36.520852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.454 [2024-12-06 03:34:36.520860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.454 [2024-12-06 03:34:36.521048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.454 [2024-12-06 03:34:36.521224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.454 [2024-12-06 03:34:36.521234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.454 [2024-12-06 03:34:36.521240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.454 [2024-12-06 03:34:36.521247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.454 [2024-12-06 03:34:36.533372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.454 [2024-12-06 03:34:36.533796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.454 [2024-12-06 03:34:36.533813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.454 [2024-12-06 03:34:36.533821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.454 [2024-12-06 03:34:36.534014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.454 [2024-12-06 03:34:36.534189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.454 [2024-12-06 03:34:36.534199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.454 [2024-12-06 03:34:36.534205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.454 [2024-12-06 03:34:36.534212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.454 5516.60 IOPS, 21.55 MiB/s [2024-12-06T02:34:36.595Z] [2024-12-06 03:34:36.546216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.454 [2024-12-06 03:34:36.546635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.454 [2024-12-06 03:34:36.546679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.454 [2024-12-06 03:34:36.546704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.454 [2024-12-06 03:34:36.547306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.454 [2024-12-06 03:34:36.547867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.454 [2024-12-06 03:34:36.547881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.454 [2024-12-06 03:34:36.547892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.454 [2024-12-06 03:34:36.547902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.454 [2024-12-06 03:34:36.559714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.454 [2024-12-06 03:34:36.560089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.454 [2024-12-06 03:34:36.560107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.454 [2024-12-06 03:34:36.560116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.454 [2024-12-06 03:34:36.560285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.454 [2024-12-06 03:34:36.560455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.454 [2024-12-06 03:34:36.560464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.454 [2024-12-06 03:34:36.560471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.454 [2024-12-06 03:34:36.560478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.454 [2024-12-06 03:34:36.572636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.454 [2024-12-06 03:34:36.573053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.454 [2024-12-06 03:34:36.573070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.454 [2024-12-06 03:34:36.573078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.454 [2024-12-06 03:34:36.573243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.454 [2024-12-06 03:34:36.573408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.454 [2024-12-06 03:34:36.573418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.454 [2024-12-06 03:34:36.573425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.454 [2024-12-06 03:34:36.573431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.454 [2024-12-06 03:34:36.585780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:16.454 [2024-12-06 03:34:36.586172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.454 [2024-12-06 03:34:36.586192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420
00:26:16.454 [2024-12-06 03:34:36.586200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set
00:26:16.454 [2024-12-06 03:34:36.586380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor
00:26:16.454 [2024-12-06 03:34:36.586560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:16.454 [2024-12-06 03:34:36.586570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:16.454 [2024-12-06 03:34:36.586576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:16.454 [2024-12-06 03:34:36.586583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:16.715 [2024-12-06 03:34:36.598900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.715 [2024-12-06 03:34:36.599349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.715 [2024-12-06 03:34:36.599395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.715 [2024-12-06 03:34:36.599427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.715 [2024-12-06 03:34:36.599870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.715 [2024-12-06 03:34:36.600050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.715 [2024-12-06 03:34:36.600061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.715 [2024-12-06 03:34:36.600068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.715 [2024-12-06 03:34:36.600075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.715 [2024-12-06 03:34:36.611725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.715 [2024-12-06 03:34:36.612169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.715 [2024-12-06 03:34:36.612186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.715 [2024-12-06 03:34:36.612194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.715 [2024-12-06 03:34:36.612359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.715 [2024-12-06 03:34:36.612525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.715 [2024-12-06 03:34:36.612535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.715 [2024-12-06 03:34:36.612541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.715 [2024-12-06 03:34:36.612548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.715 [2024-12-06 03:34:36.624642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.715 [2024-12-06 03:34:36.625087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.715 [2024-12-06 03:34:36.625133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.715 [2024-12-06 03:34:36.625169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.715 [2024-12-06 03:34:36.625675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.715 [2024-12-06 03:34:36.625840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.715 [2024-12-06 03:34:36.625849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.715 [2024-12-06 03:34:36.625856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.715 [2024-12-06 03:34:36.625863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.715 [2024-12-06 03:34:36.637723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.715 [2024-12-06 03:34:36.638160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.715 [2024-12-06 03:34:36.638177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.715 [2024-12-06 03:34:36.638184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.715 [2024-12-06 03:34:36.638366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.715 [2024-12-06 03:34:36.638545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.715 [2024-12-06 03:34:36.638555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.715 [2024-12-06 03:34:36.638561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.715 [2024-12-06 03:34:36.638568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.715 [2024-12-06 03:34:36.650555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.715 [2024-12-06 03:34:36.650975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.715 [2024-12-06 03:34:36.650993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.715 [2024-12-06 03:34:36.651017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.715 [2024-12-06 03:34:36.651198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.651378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.651388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.651395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.651402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.663640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.664081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.664127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.664152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.716 [2024-12-06 03:34:36.664731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.664907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.664917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.664924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.664931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.676855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.677168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.677186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.677194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.716 [2024-12-06 03:34:36.677373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.677553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.677563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.677574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.677581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.690097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.690463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.690482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.690490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.716 [2024-12-06 03:34:36.690670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.690851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.690862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.690869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.690876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.703202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.703667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.703684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.703693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.716 [2024-12-06 03:34:36.703873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.704066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.704077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.704085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.704093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.716386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.716825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.716843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.716851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.716 [2024-12-06 03:34:36.717037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.717217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.717227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.717235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.717242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.729612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.730030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.730048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.730057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.716 [2024-12-06 03:34:36.730236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.730416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.730426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.730433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.730440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.742746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.743201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.743219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.743227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.716 [2024-12-06 03:34:36.743401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.743576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.743586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.743593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.743600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.755772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.756208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.756227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.756235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.716 [2024-12-06 03:34:36.756408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.756583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.756593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.756600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.756606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.768901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.769317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.769335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.769346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.716 [2024-12-06 03:34:36.769521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.716 [2024-12-06 03:34:36.769697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.716 [2024-12-06 03:34:36.769707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.716 [2024-12-06 03:34:36.769714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.716 [2024-12-06 03:34:36.769720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.716 [2024-12-06 03:34:36.781991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.716 [2024-12-06 03:34:36.782418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-12-06 03:34:36.782435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-12-06 03:34:36.782443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.717 [2024-12-06 03:34:36.782616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.717 [2024-12-06 03:34:36.782791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.717 [2024-12-06 03:34:36.782801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.717 [2024-12-06 03:34:36.782808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.717 [2024-12-06 03:34:36.782815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.717 [2024-12-06 03:34:36.794969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.717 [2024-12-06 03:34:36.795381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.717 [2024-12-06 03:34:36.795399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.717 [2024-12-06 03:34:36.795407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.717 [2024-12-06 03:34:36.795581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.717 [2024-12-06 03:34:36.795755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.717 [2024-12-06 03:34:36.795765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.717 [2024-12-06 03:34:36.795772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.717 [2024-12-06 03:34:36.795779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.717 [2024-12-06 03:34:36.808024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.717 [2024-12-06 03:34:36.808400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.717 [2024-12-06 03:34:36.808417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.717 [2024-12-06 03:34:36.808425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.717 [2024-12-06 03:34:36.808599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.717 [2024-12-06 03:34:36.808777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.717 [2024-12-06 03:34:36.808787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.717 [2024-12-06 03:34:36.808793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.717 [2024-12-06 03:34:36.808800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.717 [2024-12-06 03:34:36.821072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.717 [2024-12-06 03:34:36.821486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.717 [2024-12-06 03:34:36.821503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.717 [2024-12-06 03:34:36.821511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.717 [2024-12-06 03:34:36.821686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.717 [2024-12-06 03:34:36.821862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.717 [2024-12-06 03:34:36.821872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.717 [2024-12-06 03:34:36.821879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.717 [2024-12-06 03:34:36.821885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.717 [2024-12-06 03:34:36.834160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.717 [2024-12-06 03:34:36.834589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.717 [2024-12-06 03:34:36.834607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.717 [2024-12-06 03:34:36.834615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.717 [2024-12-06 03:34:36.834790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.717 [2024-12-06 03:34:36.834970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.717 [2024-12-06 03:34:36.834980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.717 [2024-12-06 03:34:36.834987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.717 [2024-12-06 03:34:36.834994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.717 [2024-12-06 03:34:36.847302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.717 [2024-12-06 03:34:36.847715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.717 [2024-12-06 03:34:36.847734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.717 [2024-12-06 03:34:36.847742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.717 [2024-12-06 03:34:36.847920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.717 [2024-12-06 03:34:36.848106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.717 [2024-12-06 03:34:36.848116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.717 [2024-12-06 03:34:36.848127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.717 [2024-12-06 03:34:36.848133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.978 [2024-12-06 03:34:36.860363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.978 [2024-12-06 03:34:36.860727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.978 [2024-12-06 03:34:36.860746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.978 [2024-12-06 03:34:36.860754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.978 [2024-12-06 03:34:36.860928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.978 [2024-12-06 03:34:36.861109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.978 [2024-12-06 03:34:36.861120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.978 [2024-12-06 03:34:36.861126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.978 [2024-12-06 03:34:36.861133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.978 [2024-12-06 03:34:36.873413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.978 [2024-12-06 03:34:36.873847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.978 [2024-12-06 03:34:36.873865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.978 [2024-12-06 03:34:36.873873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.978 [2024-12-06 03:34:36.874053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.978 [2024-12-06 03:34:36.874229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.978 [2024-12-06 03:34:36.874239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.978 [2024-12-06 03:34:36.874246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.978 [2024-12-06 03:34:36.874253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.978 [2024-12-06 03:34:36.886509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.978 [2024-12-06 03:34:36.886943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.978 [2024-12-06 03:34:36.886965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.978 [2024-12-06 03:34:36.886973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.978 [2024-12-06 03:34:36.887146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.978 [2024-12-06 03:34:36.887320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.978 [2024-12-06 03:34:36.887330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.978 [2024-12-06 03:34:36.887337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.978 [2024-12-06 03:34:36.887343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.978 [2024-12-06 03:34:36.899606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.978 [2024-12-06 03:34:36.900040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.978 [2024-12-06 03:34:36.900057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.978 [2024-12-06 03:34:36.900065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.978 [2024-12-06 03:34:36.900239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.978 [2024-12-06 03:34:36.900413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.978 [2024-12-06 03:34:36.900423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.978 [2024-12-06 03:34:36.900429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.978 [2024-12-06 03:34:36.900436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.978 [2024-12-06 03:34:36.912682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.978 [2024-12-06 03:34:36.913117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.978 [2024-12-06 03:34:36.913135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.978 [2024-12-06 03:34:36.913143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.978 [2024-12-06 03:34:36.913323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.978 [2024-12-06 03:34:36.913503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.978 [2024-12-06 03:34:36.913512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.978 [2024-12-06 03:34:36.913519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.978 [2024-12-06 03:34:36.913526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.978 [2024-12-06 03:34:36.925820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.978 [2024-12-06 03:34:36.926266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.978 [2024-12-06 03:34:36.926284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.978 [2024-12-06 03:34:36.926292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.978 [2024-12-06 03:34:36.926472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.978 [2024-12-06 03:34:36.926651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.978 [2024-12-06 03:34:36.926661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.978 [2024-12-06 03:34:36.926668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.978 [2024-12-06 03:34:36.926675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.978 [2024-12-06 03:34:36.938822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.978 [2024-12-06 03:34:36.939263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.978 [2024-12-06 03:34:36.939281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.978 [2024-12-06 03:34:36.939292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.978 [2024-12-06 03:34:36.939468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.978 [2024-12-06 03:34:36.939643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.978 [2024-12-06 03:34:36.939653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.978 [2024-12-06 03:34:36.939660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.978 [2024-12-06 03:34:36.939666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.978 [2024-12-06 03:34:36.951912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.978 [2024-12-06 03:34:36.952370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.978 [2024-12-06 03:34:36.952387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.978 [2024-12-06 03:34:36.952396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.978 [2024-12-06 03:34:36.952569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.978 [2024-12-06 03:34:36.952744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.978 [2024-12-06 03:34:36.952754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.978 [2024-12-06 03:34:36.952760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.978 [2024-12-06 03:34:36.952767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.978 [2024-12-06 03:34:36.965026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.978 [2024-12-06 03:34:36.965466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.978 [2024-12-06 03:34:36.965484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.978 [2024-12-06 03:34:36.965492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.979 [2024-12-06 03:34:36.965666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.979 [2024-12-06 03:34:36.965840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.979 [2024-12-06 03:34:36.965850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.979 [2024-12-06 03:34:36.965856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.979 [2024-12-06 03:34:36.965863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.979 [2024-12-06 03:34:36.978140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.979 [2024-12-06 03:34:36.978570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.979 [2024-12-06 03:34:36.978588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.979 [2024-12-06 03:34:36.978596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.979 [2024-12-06 03:34:36.978770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.979 [2024-12-06 03:34:36.978954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.979 [2024-12-06 03:34:36.978965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.979 [2024-12-06 03:34:36.978972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.979 [2024-12-06 03:34:36.978979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.979 [2024-12-06 03:34:36.991221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.979 [2024-12-06 03:34:36.991653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.979 [2024-12-06 03:34:36.991671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.979 [2024-12-06 03:34:36.991679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.979 [2024-12-06 03:34:36.991853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.979 [2024-12-06 03:34:36.992033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.979 [2024-12-06 03:34:36.992044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.979 [2024-12-06 03:34:36.992051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.979 [2024-12-06 03:34:36.992058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.979 [2024-12-06 03:34:37.004314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.979 [2024-12-06 03:34:37.004747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.979 [2024-12-06 03:34:37.004765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.979 [2024-12-06 03:34:37.004774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.979 [2024-12-06 03:34:37.004953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.979 [2024-12-06 03:34:37.005129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.979 [2024-12-06 03:34:37.005139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.979 [2024-12-06 03:34:37.005145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.979 [2024-12-06 03:34:37.005152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.979 [2024-12-06 03:34:37.017371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.979 [2024-12-06 03:34:37.017806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.979 [2024-12-06 03:34:37.017823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.979 [2024-12-06 03:34:37.017831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.979 [2024-12-06 03:34:37.018012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.979 [2024-12-06 03:34:37.018187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.979 [2024-12-06 03:34:37.018196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.979 [2024-12-06 03:34:37.018204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.979 [2024-12-06 03:34:37.018216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.979 [2024-12-06 03:34:37.030488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.979 [2024-12-06 03:34:37.030853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.979 [2024-12-06 03:34:37.030871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.979 [2024-12-06 03:34:37.030879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.979 [2024-12-06 03:34:37.031056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.979 [2024-12-06 03:34:37.031231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.979 [2024-12-06 03:34:37.031241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.979 [2024-12-06 03:34:37.031248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.979 [2024-12-06 03:34:37.031254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.979 [2024-12-06 03:34:37.043592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.979 [2024-12-06 03:34:37.043999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.979 [2024-12-06 03:34:37.044016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.979 [2024-12-06 03:34:37.044025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.979 [2024-12-06 03:34:37.044199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.979 [2024-12-06 03:34:37.044373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.979 [2024-12-06 03:34:37.044383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.979 [2024-12-06 03:34:37.044389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.979 [2024-12-06 03:34:37.044396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.979 [2024-12-06 03:34:37.056581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.979 [2024-12-06 03:34:37.056941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.979 [2024-12-06 03:34:37.056963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.979 [2024-12-06 03:34:37.056972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.979 [2024-12-06 03:34:37.057145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.979 [2024-12-06 03:34:37.057321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.979 [2024-12-06 03:34:37.057331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.979 [2024-12-06 03:34:37.057338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.979 [2024-12-06 03:34:37.057345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2765434 Killed "${NVMF_APP[@]}" "$@" 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2766636 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2766636 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:16.979 [2024-12-06 03:34:37.069798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2766636 ']' 00:26:16.979 [2024-12-06 03:34:37.070173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.979 [2024-12-06 03:34:37.070191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.979 [2024-12-06 03:34:37.070200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.979 [2024-12-06 03:34:37.070380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.979 [2024-12-06 03:34:37.070561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.979 [2024-12-06 03:34:37.070572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.979 [2024-12-06 03:34:37.070579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.979 [2024-12-06 03:34:37.070586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.979 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:16.980 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.980 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:16.980 [2024-12-06 03:34:37.082902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.980 [2024-12-06 03:34:37.083314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.980 [2024-12-06 03:34:37.083332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.980 [2024-12-06 03:34:37.083339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.980 [2024-12-06 03:34:37.083519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.980 [2024-12-06 03:34:37.083699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.980 [2024-12-06 03:34:37.083708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.980 [2024-12-06 03:34:37.083715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.980 [2024-12-06 03:34:37.083721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.980 [2024-12-06 03:34:37.096048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.980 [2024-12-06 03:34:37.096416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.980 [2024-12-06 03:34:37.096434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.980 [2024-12-06 03:34:37.096443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.980 [2024-12-06 03:34:37.096621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.980 [2024-12-06 03:34:37.096801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.980 [2024-12-06 03:34:37.096811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.980 [2024-12-06 03:34:37.096817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.980 [2024-12-06 03:34:37.096824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:16.980 [2024-12-06 03:34:37.109099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:16.980 [2024-12-06 03:34:37.109537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.980 [2024-12-06 03:34:37.109554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:16.980 [2024-12-06 03:34:37.109563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:16.980 [2024-12-06 03:34:37.109742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:16.980 [2024-12-06 03:34:37.109922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:16.980 [2024-12-06 03:34:37.109932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:16.980 [2024-12-06 03:34:37.109939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:16.980 [2024-12-06 03:34:37.109945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:17.241 [2024-12-06 03:34:37.117834] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:26:17.241 [2024-12-06 03:34:37.117877] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.241 [2024-12-06 03:34:37.122274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.241 [2024-12-06 03:34:37.122620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.241 [2024-12-06 03:34:37.122639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.241 [2024-12-06 03:34:37.122647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.241 [2024-12-06 03:34:37.122827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.241 [2024-12-06 03:34:37.123013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.241 [2024-12-06 03:34:37.123025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.241 [2024-12-06 03:34:37.123032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.241 [2024-12-06 03:34:37.123039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.241 [2024-12-06 03:34:37.135399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.241 [2024-12-06 03:34:37.135735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.241 [2024-12-06 03:34:37.135753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.241 [2024-12-06 03:34:37.135762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.241 [2024-12-06 03:34:37.135937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.241 [2024-12-06 03:34:37.136120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.241 [2024-12-06 03:34:37.136130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.241 [2024-12-06 03:34:37.136137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.241 [2024-12-06 03:34:37.136145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.241 [2024-12-06 03:34:37.148459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.241 [2024-12-06 03:34:37.148884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.241 [2024-12-06 03:34:37.148902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.241 [2024-12-06 03:34:37.148910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.241 [2024-12-06 03:34:37.149094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.241 [2024-12-06 03:34:37.149276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.241 [2024-12-06 03:34:37.149285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.241 [2024-12-06 03:34:37.149292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.241 [2024-12-06 03:34:37.149299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.241 [2024-12-06 03:34:37.161606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.241 [2024-12-06 03:34:37.162022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.241 [2024-12-06 03:34:37.162041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.241 [2024-12-06 03:34:37.162049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.241 [2024-12-06 03:34:37.162228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.241 [2024-12-06 03:34:37.162409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.241 [2024-12-06 03:34:37.162418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.241 [2024-12-06 03:34:37.162425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.241 [2024-12-06 03:34:37.162433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.241 [2024-12-06 03:34:37.174751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.241 [2024-12-06 03:34:37.175051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.241 [2024-12-06 03:34:37.175072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.241 [2024-12-06 03:34:37.175081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.241 [2024-12-06 03:34:37.175259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.241 [2024-12-06 03:34:37.175440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.241 [2024-12-06 03:34:37.175450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.241 [2024-12-06 03:34:37.175456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.241 [2024-12-06 03:34:37.175463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.241 [2024-12-06 03:34:37.187823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.241 [2024-12-06 03:34:37.188119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:17.241 [2024-12-06 03:34:37.188204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.241 [2024-12-06 03:34:37.188222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.241 [2024-12-06 03:34:37.188230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.241 [2024-12-06 03:34:37.188410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.241 [2024-12-06 03:34:37.188591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.241 [2024-12-06 03:34:37.188603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.241 [2024-12-06 03:34:37.188610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.241 [2024-12-06 03:34:37.188616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.241 [2024-12-06 03:34:37.200932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.241 [2024-12-06 03:34:37.201326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.241 [2024-12-06 03:34:37.201348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.241 [2024-12-06 03:34:37.201358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.241 [2024-12-06 03:34:37.201539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.241 [2024-12-06 03:34:37.201720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.241 [2024-12-06 03:34:37.201731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.241 [2024-12-06 03:34:37.201739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.242 [2024-12-06 03:34:37.201747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.242 [2024-12-06 03:34:37.214109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.242 [2024-12-06 03:34:37.214453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.242 [2024-12-06 03:34:37.214472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.242 [2024-12-06 03:34:37.214481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.242 [2024-12-06 03:34:37.214659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.242 [2024-12-06 03:34:37.214834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.242 [2024-12-06 03:34:37.214845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.242 [2024-12-06 03:34:37.214852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.242 [2024-12-06 03:34:37.214860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.242 [2024-12-06 03:34:37.227230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.242 [2024-12-06 03:34:37.227577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.242 [2024-12-06 03:34:37.227596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.242 [2024-12-06 03:34:37.227603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.242 [2024-12-06 03:34:37.227778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.242 [2024-12-06 03:34:37.227957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.242 [2024-12-06 03:34:37.227968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.242 [2024-12-06 03:34:37.227976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.242 [2024-12-06 03:34:37.227984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:17.242 [2024-12-06 03:34:37.232309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.242 [2024-12-06 03:34:37.232337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.242 [2024-12-06 03:34:37.232344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.242 [2024-12-06 03:34:37.232350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:17.242 [2024-12-06 03:34:37.232356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.242 [2024-12-06 03:34:37.233647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.242 [2024-12-06 03:34:37.233679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.242 [2024-12-06 03:34:37.233681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.242 [2024-12-06 03:34:37.240600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.242 [2024-12-06 03:34:37.241025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.242 [2024-12-06 03:34:37.241048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.242 [2024-12-06 03:34:37.241058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.242 [2024-12-06 03:34:37.241239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.242 [2024-12-06 03:34:37.241422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.242 [2024-12-06 03:34:37.241433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.242 [2024-12-06 03:34:37.241442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.242 [2024-12-06 03:34:37.241450] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.242 [2024-12-06 03:34:37.253780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.242 [2024-12-06 03:34:37.254142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.242 [2024-12-06 03:34:37.254163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.242 [2024-12-06 03:34:37.254173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.242 [2024-12-06 03:34:37.254354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.242 [2024-12-06 03:34:37.254535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.242 [2024-12-06 03:34:37.254546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.242 [2024-12-06 03:34:37.254554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.242 [2024-12-06 03:34:37.254561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.242 [2024-12-06 03:34:37.266875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.242 [2024-12-06 03:34:37.267252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.242 [2024-12-06 03:34:37.267274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.242 [2024-12-06 03:34:37.267284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.242 [2024-12-06 03:34:37.267465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.242 [2024-12-06 03:34:37.267646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.242 [2024-12-06 03:34:37.267656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.242 [2024-12-06 03:34:37.267664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.242 [2024-12-06 03:34:37.267672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.242 [2024-12-06 03:34:37.279977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.242 [2024-12-06 03:34:37.280363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.242 [2024-12-06 03:34:37.280383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.242 [2024-12-06 03:34:37.280393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.242 [2024-12-06 03:34:37.280573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.242 [2024-12-06 03:34:37.280755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.242 [2024-12-06 03:34:37.280765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.242 [2024-12-06 03:34:37.280772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.242 [2024-12-06 03:34:37.280780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.242 [2024-12-06 03:34:37.293108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.242 [2024-12-06 03:34:37.293498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.242 [2024-12-06 03:34:37.293524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.242 [2024-12-06 03:34:37.293534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.242 [2024-12-06 03:34:37.293714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.242 [2024-12-06 03:34:37.293897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.243 [2024-12-06 03:34:37.293907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.243 [2024-12-06 03:34:37.293915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.243 [2024-12-06 03:34:37.293922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.243 [2024-12-06 03:34:37.306259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.243 [2024-12-06 03:34:37.306613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.243 [2024-12-06 03:34:37.306632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.243 [2024-12-06 03:34:37.306641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.243 [2024-12-06 03:34:37.306821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.243 [2024-12-06 03:34:37.307008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.243 [2024-12-06 03:34:37.307020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.243 [2024-12-06 03:34:37.307027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.243 [2024-12-06 03:34:37.307035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.243 [2024-12-06 03:34:37.319348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.243 [2024-12-06 03:34:37.319777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.243 [2024-12-06 03:34:37.319795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.243 [2024-12-06 03:34:37.319804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.243 [2024-12-06 03:34:37.319988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.243 [2024-12-06 03:34:37.320170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.243 [2024-12-06 03:34:37.320180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.243 [2024-12-06 03:34:37.320187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.243 [2024-12-06 03:34:37.320194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.243 [2024-12-06 03:34:37.332516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.243 [2024-12-06 03:34:37.332884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.243 [2024-12-06 03:34:37.332903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.243 [2024-12-06 03:34:37.332911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.243 [2024-12-06 03:34:37.333100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.243 [2024-12-06 03:34:37.333282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.243 [2024-12-06 03:34:37.333293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.243 [2024-12-06 03:34:37.333299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.243 [2024-12-06 03:34:37.333306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.243 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.243 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:17.243 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.243 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.243 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.243 [2024-12-06 03:34:37.345612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.243 [2024-12-06 03:34:37.346074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.243 [2024-12-06 03:34:37.346094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.243 [2024-12-06 03:34:37.346103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.243 [2024-12-06 03:34:37.346283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.243 [2024-12-06 03:34:37.346464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.243 [2024-12-06 03:34:37.346474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.243 [2024-12-06 03:34:37.346483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.243 [2024-12-06 03:34:37.346491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.243 [2024-12-06 03:34:37.358805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.243 [2024-12-06 03:34:37.359108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.243 [2024-12-06 03:34:37.359128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.243 [2024-12-06 03:34:37.359138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.243 [2024-12-06 03:34:37.359319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.243 [2024-12-06 03:34:37.359500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.243 [2024-12-06 03:34:37.359510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.243 [2024-12-06 03:34:37.359517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.243 [2024-12-06 03:34:37.359523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.243 [2024-12-06 03:34:37.371998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.243 [2024-12-06 03:34:37.372314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.243 [2024-12-06 03:34:37.372333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.243 [2024-12-06 03:34:37.372341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.243 [2024-12-06 03:34:37.372524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.243 [2024-12-06 03:34:37.372704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.243 [2024-12-06 03:34:37.372714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.243 [2024-12-06 03:34:37.372721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.243 [2024-12-06 03:34:37.372727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.243 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.243 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.503 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.503 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.503 [2024-12-06 03:34:37.383669] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.503 [2024-12-06 03:34:37.385208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.503 [2024-12-06 03:34:37.385601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.503 [2024-12-06 03:34:37.385620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.503 [2024-12-06 03:34:37.385628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.503 [2024-12-06 03:34:37.385806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.503 [2024-12-06 03:34:37.385992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.503 [2024-12-06 03:34:37.386003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.503 [2024-12-06 03:34:37.386010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.503 [2024-12-06 03:34:37.386017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.503 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.503 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:17.503 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.503 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.503 [2024-12-06 03:34:37.398382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.503 [2024-12-06 03:34:37.398751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.503 [2024-12-06 03:34:37.398770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.503 [2024-12-06 03:34:37.398779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.503 [2024-12-06 03:34:37.398963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.503 [2024-12-06 03:34:37.399143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.503 [2024-12-06 03:34:37.399153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.503 [2024-12-06 03:34:37.399161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.503 [2024-12-06 03:34:37.399171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.503 [2024-12-06 03:34:37.411496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.503 [2024-12-06 03:34:37.411853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.503 [2024-12-06 03:34:37.411871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.503 [2024-12-06 03:34:37.411879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.503 [2024-12-06 03:34:37.412062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.503 [2024-12-06 03:34:37.412243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.503 [2024-12-06 03:34:37.412253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.503 [2024-12-06 03:34:37.412260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.504 [2024-12-06 03:34:37.412267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.504 Malloc0 00:26:17.504 [2024-12-06 03:34:37.424590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.504 [2024-12-06 03:34:37.425030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.504 [2024-12-06 03:34:37.425049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.504 [2024-12-06 03:34:37.425058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.504 [2024-12-06 03:34:37.425237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:17.504 [2024-12-06 03:34:37.425418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.504 [2024-12-06 03:34:37.425429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.504 [2024-12-06 03:34:37.425436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:17.504 [2024-12-06 03:34:37.425443] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.504 [2024-12-06 03:34:37.437774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.504 [2024-12-06 03:34:37.438177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.504 [2024-12-06 03:34:37.438195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdd120 with addr=10.0.0.2, port=4420 00:26:17.504 [2024-12-06 03:34:37.438204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd120 is same with the state(6) to be set 00:26:17.504 [2024-12-06 03:34:37.438387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdd120 (9): Bad file descriptor 00:26:17.504 [2024-12-06 03:34:37.438566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:17.504 [2024-12-06 03:34:37.438576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:17.504 [2024-12-06 03:34:37.438583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:17.504 [2024-12-06 03:34:37.438590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.504 [2024-12-06 03:34:37.447994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.504 [2024-12-06 03:34:37.450906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.504 03:34:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2765695 00:26:17.504 4597.17 IOPS, 17.96 MiB/s [2024-12-06T02:34:37.645Z] [2024-12-06 03:34:37.605283] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:19.822 5382.43 IOPS, 21.03 MiB/s [2024-12-06T02:34:40.900Z] 6091.88 IOPS, 23.80 MiB/s [2024-12-06T02:34:41.839Z] 6639.78 IOPS, 25.94 MiB/s [2024-12-06T02:34:42.787Z] 7066.10 IOPS, 27.60 MiB/s [2024-12-06T02:34:43.722Z] 7411.45 IOPS, 28.95 MiB/s [2024-12-06T02:34:44.659Z] 7695.42 IOPS, 30.06 MiB/s [2024-12-06T02:34:45.597Z] 7939.77 IOPS, 31.01 MiB/s [2024-12-06T02:34:46.976Z] 8144.71 IOPS, 31.82 MiB/s [2024-12-06T02:34:46.976Z] 8331.60 IOPS, 32.55 MiB/s 00:26:26.835 Latency(us) 00:26:26.835 [2024-12-06T02:34:46.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.835 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:26.835 Verification LBA range: start 0x0 length 0x4000 00:26:26.835 Nvme1n1 : 15.01 8336.01 32.56 11116.63 0.00 6559.77 666.05 21199.47 00:26:26.835 [2024-12-06T02:34:46.976Z] =================================================================================================================== 00:26:26.835 [2024-12-06T02:34:46.976Z] Total : 8336.01 32.56 11116.63 0.00 6559.77 666.05 21199.47 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:26.835 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.835 rmmod nvme_tcp 00:26:26.836 rmmod nvme_fabrics 00:26:26.836 rmmod nvme_keyring 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2766636 ']' 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2766636 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2766636 ']' 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2766636 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2766636 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2766636' 00:26:26.836 killing process with pid 2766636 00:26:26.836 
03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2766636 00:26:26.836 03:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2766636 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.095 03:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.003 03:34:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.003 00:26:29.003 real 0m26.095s 00:26:29.003 user 1m1.222s 00:26:29.003 sys 0m6.705s 00:26:29.003 03:34:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.003 03:34:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.003 ************************************ 00:26:29.003 END TEST nvmf_bdevperf 00:26:29.003 
************************************ 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.262 ************************************ 00:26:29.262 START TEST nvmf_target_disconnect 00:26:29.262 ************************************ 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:29.262 * Looking for test storage... 00:26:29.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:29.262 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:29.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.263 --rc genhtml_branch_coverage=1 00:26:29.263 --rc genhtml_function_coverage=1 00:26:29.263 --rc genhtml_legend=1 00:26:29.263 --rc geninfo_all_blocks=1 00:26:29.263 --rc geninfo_unexecuted_blocks=1 
00:26:29.263 00:26:29.263 ' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:29.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.263 --rc genhtml_branch_coverage=1 00:26:29.263 --rc genhtml_function_coverage=1 00:26:29.263 --rc genhtml_legend=1 00:26:29.263 --rc geninfo_all_blocks=1 00:26:29.263 --rc geninfo_unexecuted_blocks=1 00:26:29.263 00:26:29.263 ' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:29.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.263 --rc genhtml_branch_coverage=1 00:26:29.263 --rc genhtml_function_coverage=1 00:26:29.263 --rc genhtml_legend=1 00:26:29.263 --rc geninfo_all_blocks=1 00:26:29.263 --rc geninfo_unexecuted_blocks=1 00:26:29.263 00:26:29.263 ' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:29.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.263 --rc genhtml_branch_coverage=1 00:26:29.263 --rc genhtml_function_coverage=1 00:26:29.263 --rc genhtml_legend=1 00:26:29.263 --rc geninfo_all_blocks=1 00:26:29.263 --rc geninfo_unexecuted_blocks=1 00:26:29.263 00:26:29.263 ' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.263 03:34:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.263 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.264 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.264 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.264 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.264 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:29.264 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.264 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.264 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.264 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.264 03:34:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:35.876 
03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:35.876 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:35.876 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.876 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:35.877 Found net devices under 0000:86:00.0: cvl_0_0 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:35.877 Found net devices under 0000:86:00.1: cvl_0_1 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:35.877 03:34:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.877 03:34:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:35.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:26:35.877 00:26:35.877 --- 10.0.0.2 ping statistics --- 00:26:35.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.877 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:35.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:26:35.877 00:26:35.877 --- 10.0.0.1 ping statistics --- 00:26:35.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.877 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:35.877 03:34:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.877 ************************************ 00:26:35.877 START TEST nvmf_target_disconnect_tc1 00:26:35.877 ************************************ 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.877 [2024-12-06 03:34:55.253138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.877 [2024-12-06 03:34:55.253191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf36ac0 with 
addr=10.0.0.2, port=4420 00:26:35.877 [2024-12-06 03:34:55.253210] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:35.877 [2024-12-06 03:34:55.253219] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:35.877 [2024-12-06 03:34:55.253227] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:35.877 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:35.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:35.877 Initializing NVMe Controllers 00:26:35.877 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:35.878 00:26:35.878 real 0m0.108s 00:26:35.878 user 0m0.052s 00:26:35.878 sys 0m0.052s 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 ************************************ 00:26:35.878 END TEST nvmf_target_disconnect_tc1 00:26:35.878 ************************************ 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:35.878 03:34:55 
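tc1 above launches the reconnect example before any target is listening on 10.0.0.2:4420, so connect() fails with errno 111 (ECONNREFUSED) and probing aborts; the surrounding NOT wrapper turns that expected failure (es=1) into a test pass. A minimal sketch of that inverted-assertion idiom, assuming a simplified NOT (SPDK's own helper in autotest_common.sh carries extra bookkeeping):

```shell
# Simplified sketch of the expected-failure idiom used by tc1:
# NOT succeeds only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test failure
    fi
    return 0        # command failed as expected -> test success
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```

In the log the wrapped command is the reconnect binary itself, and the `es=1 / (( !es == 0 ))` lines afterward are the wrapper checking that a non-zero (but non-signal, <= 128) exit status was produced.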
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 ************************************ 00:26:35.878 START TEST nvmf_target_disconnect_tc2 00:26:35.878 ************************************ 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2771787 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2771787 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2771787 ']' 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 [2024-12-06 03:34:55.389870] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:26:35.878 [2024-12-06 03:34:55.389907] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.878 [2024-12-06 03:34:55.468163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.878 [2024-12-06 03:34:55.508148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.878 [2024-12-06 03:34:55.508188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.878 [2024-12-06 03:34:55.508195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.878 [2024-12-06 03:34:55.508201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.878 [2024-12-06 03:34:55.508207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:35.878 [2024-12-06 03:34:55.510838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:35.878 [2024-12-06 03:34:55.510978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:35.878 [2024-12-06 03:34:55.511063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:35.878 [2024-12-06 03:34:55.511064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 Malloc0 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.878 03:34:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 [2024-12-06 03:34:55.700736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.878 03:34:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 [2024-12-06 03:34:55.732987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2771915 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:35.878 03:34:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:37.787 03:34:57 
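For tc2, the rpc_cmd calls above configure the target running inside the namespace: create a 64 MiB Malloc0 bdev, a TCP transport, the cnode1 subsystem with Malloc0 as a namespace, and listeners (subsystem and discovery) on 10.0.0.2:4420, before launching the reconnect workload against it. In SPDK, rpc_cmd wraps scripts/rpc.py, so the sequence can be sketched as the equivalent direct invocations; this dry-run echoes via `rpc` rather than executing, and the real call would also need the target's `-r` RPC socket path:

```shell
# Dry-run sketch of the RPC sequence from the log, as scripts/rpc.py calls.
# 'rpc' echoes; point it at the real rpc.py (plus the target's -r socket) to execute.
rpc() { echo "rpc.py $*"; }

rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The burst of "Read/Write completed with error (sct=0, sc=8)" lines that follows is the expected effect of `kill -9` on the target PID while the reconnect workload runs: in-flight I/O is aborted and each qpair reports a CQ transport error (-6, no such device or address) before the example attempts to reconnect.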
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2771787 00:26:37.787 03:34:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 
Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 [2024-12-06 03:34:57.769122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O 
failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Write completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.787 starting I/O failed 00:26:37.787 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 
00:26:37.788 [2024-12-06 03:34:57.769323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 
starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 [2024-12-06 03:34:57.769520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 
00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Write completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 Read completed with error (sct=0, sc=8) 00:26:37.788 starting I/O failed 00:26:37.788 [2024-12-06 03:34:57.769722] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:37.788 [2024-12-06 03:34:57.769995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.788 [2024-12-06 03:34:57.770015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:37.788 qpair failed and we were unable to recover it. 00:26:37.788 [2024-12-06 03:34:57.770198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.788 [2024-12-06 03:34:57.770211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:37.788 qpair failed and we were unable to recover it. 00:26:37.788 [2024-12-06 03:34:57.770388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.788 [2024-12-06 03:34:57.770421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:37.788 qpair failed and we were unable to recover it. 00:26:37.788 [2024-12-06 03:34:57.770665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.789 [2024-12-06 03:34:57.770698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:37.789 qpair failed and we were unable to recover it. 00:26:37.789 [2024-12-06 03:34:57.770841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.789 [2024-12-06 03:34:57.770879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:37.789 qpair failed and we were unable to recover it. 
00:26:37.789 [2024-12-06 03:34:57.771079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.771107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.771222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.771234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.771388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.771422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.771557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.771589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.771756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.771790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.771922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.771963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.772213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.772245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.772514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.772546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.772836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.772869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.773080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.773113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.773385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.773416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.773572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.773604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.773805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.773838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.774028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.774076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.774264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.774298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.774488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.774500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.774750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.774783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.774971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.775004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.775156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.775189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.775409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.775440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.775628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.775661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.775846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.775879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.776066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.776100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.776305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.776336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.776591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.776623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.776813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.776845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.777087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.777120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.777403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.777435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.777769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.777801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.777981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.778016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.789 qpair failed and we were unable to recover it.
00:26:37.789 [2024-12-06 03:34:57.778169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.789 [2024-12-06 03:34:57.778201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.778446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.778477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.778748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.778787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.779078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.779111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.779361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.779394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.779710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.779743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.779884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.779916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.780096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.780160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.780382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.780443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.780744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.780785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.781007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.781025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.781215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.781230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.781411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.781445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.781571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.781603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.781785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.781817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.781994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.782006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.782152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.782164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.782319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.782354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.782556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.782588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.782728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.782761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.782965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.782999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.783217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.783249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.783497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.783530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.783728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.783769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.784038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.784071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.784344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.784376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.784659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.784692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.784968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.785003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.785247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.785280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.785527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.785560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.785758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.785770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.785976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.786010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.786258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.790 [2024-12-06 03:34:57.786290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.790 qpair failed and we were unable to recover it.
00:26:37.790 [2024-12-06 03:34:57.786530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.786561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.786811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.786844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.787087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.787121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.787371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.787404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.787599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.787633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.787897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.787910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.788005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.788016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.788223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.788255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.788528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.788559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.788851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.788883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.789024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.789058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.789302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.789335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.789628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.789661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.789868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.789899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.790062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.790095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.790231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.790263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.790530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.790563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.790843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.790877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.791152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.791185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.791471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.791503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.791823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.791856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.792112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.792144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.792337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.792370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.792591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.792623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.792745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.792777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.792992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.793026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.793321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.793353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.793527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.793539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.793750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.793782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.793999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.794033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.794292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.794331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.791 qpair failed and we were unable to recover it.
00:26:37.791 [2024-12-06 03:34:57.794473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.791 [2024-12-06 03:34:57.794506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.794791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.794824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.795058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.795091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.795291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.795324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.795598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.795630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.795797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.795809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.795984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.795997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.796226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.796259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.796545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.796577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.796823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.796855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.797175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.797208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.797458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.797490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.797777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.797809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.798097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.798131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.798268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.798300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.798544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.798577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.798828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.798862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.799088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.799121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.799424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.799457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.799737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.799768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.799879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:37.792 [2024-12-06 03:34:57.799909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:37.792 qpair failed and we were unable to recover it.
00:26:37.792 [2024-12-06 03:34:57.800186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.792 [2024-12-06 03:34:57.800219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.792 qpair failed and we were unable to recover it. 00:26:37.792 [2024-12-06 03:34:57.800433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.792 [2024-12-06 03:34:57.800465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.792 qpair failed and we were unable to recover it. 00:26:37.792 [2024-12-06 03:34:57.800780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.792 [2024-12-06 03:34:57.800813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.792 qpair failed and we were unable to recover it. 00:26:37.792 [2024-12-06 03:34:57.801003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.792 [2024-12-06 03:34:57.801036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.792 qpair failed and we were unable to recover it. 00:26:37.792 [2024-12-06 03:34:57.801330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.792 [2024-12-06 03:34:57.801363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.792 qpair failed and we were unable to recover it. 
00:26:37.792 [2024-12-06 03:34:57.801512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.792 [2024-12-06 03:34:57.801525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.792 qpair failed and we were unable to recover it. 00:26:37.792 [2024-12-06 03:34:57.801730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.792 [2024-12-06 03:34:57.801742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.792 qpair failed and we were unable to recover it. 00:26:37.792 [2024-12-06 03:34:57.801994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.792 [2024-12-06 03:34:57.802006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.792 qpair failed and we were unable to recover it. 00:26:37.792 [2024-12-06 03:34:57.802104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.802115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.802257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.802290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 
00:26:37.793 [2024-12-06 03:34:57.802470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.802503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.802617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.802648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.802916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.802959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.803092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.803124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.803398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.803431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 
00:26:37.793 [2024-12-06 03:34:57.803618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.803651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.803860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.803893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.804229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.804263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.804517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.804556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.804848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.804881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 
00:26:37.793 [2024-12-06 03:34:57.805103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.805137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.805325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.805359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.805600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.805612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.805771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.805784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.805882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.805893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 
00:26:37.793 [2024-12-06 03:34:57.806033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.806046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.806272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.806284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.806455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.806467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.806615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.806627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.806774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.806787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 
00:26:37.793 [2024-12-06 03:34:57.806873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.806884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.806973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.806984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.807165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.807199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.807445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.807477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 00:26:37.793 [2024-12-06 03:34:57.807740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.793 [2024-12-06 03:34:57.807773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.793 qpair failed and we were unable to recover it. 
00:26:37.793 [2024-12-06 03:34:57.808063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.808098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.808287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.808320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.808511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.808544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.808792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.808825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.809068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.809103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 
00:26:37.794 [2024-12-06 03:34:57.809329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.809362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.809539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.809551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.809733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.809767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.809966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.810001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.810288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.810321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 
00:26:37.794 [2024-12-06 03:34:57.810621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.810654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.810934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.810950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.811110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.811122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.811285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.811297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.811387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.811399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 
00:26:37.794 [2024-12-06 03:34:57.811546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.811558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.811635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.811646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.811924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.811965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.812258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.812291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.812508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.812541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 
00:26:37.794 [2024-12-06 03:34:57.812808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.812841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.813087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.813100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.813268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.813280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.813448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.813487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.813702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.813735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 
00:26:37.794 [2024-12-06 03:34:57.813991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.814027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.814217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.814250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.814529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.814563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.814831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.814865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.815144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.815179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 
00:26:37.794 [2024-12-06 03:34:57.815454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.815487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.794 [2024-12-06 03:34:57.815599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.794 [2024-12-06 03:34:57.815632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.794 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.815877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.815910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.816165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.816201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.816493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.816526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 
00:26:37.795 [2024-12-06 03:34:57.816834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.816866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.817126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.817161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.817394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.817428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.817688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.817701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.817851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.817863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 
00:26:37.795 [2024-12-06 03:34:57.818022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.818035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.818209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.818242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.818469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.818502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.818642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.818674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 00:26:37.795 [2024-12-06 03:34:57.818879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.795 [2024-12-06 03:34:57.818891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.795 qpair failed and we were unable to recover it. 
00:26:37.799 [2024-12-06 03:34:57.846223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.846234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.846369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.846382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.846533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.846580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.846854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.846888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.847099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.847133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 
00:26:37.799 [2024-12-06 03:34:57.847381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.847412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.847659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.847693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.847969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.848004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.848254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.848287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.848486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.848519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 
00:26:37.799 [2024-12-06 03:34:57.848792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.848825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.849072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.849085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.849308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.849342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.849473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.849505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.849781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.849813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 
00:26:37.799 [2024-12-06 03:34:57.850012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.850053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.850332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.850364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.850633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.850675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.850898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.850910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.851058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.851071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 
00:26:37.799 [2024-12-06 03:34:57.851282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.851315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.851636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.851668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.851979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.852013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.852214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.852247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.852444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.852476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 
00:26:37.799 [2024-12-06 03:34:57.852748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.852760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.852915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.852928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.853038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.853050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.853296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.853328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 00:26:37.799 [2024-12-06 03:34:57.853670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.799 [2024-12-06 03:34:57.853704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.799 qpair failed and we were unable to recover it. 
00:26:37.800 [2024-12-06 03:34:57.853922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.853934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.854136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.854171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.854366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.854398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.854693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.854726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.855004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.855038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 
00:26:37.800 [2024-12-06 03:34:57.855317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.855349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.855610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.855643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.855881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.855893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.856140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.856153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.856331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.856364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 
00:26:37.800 [2024-12-06 03:34:57.856616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.856650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.856828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.856840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.857049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.857084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.857301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.857335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.857603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.857637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 
00:26:37.800 [2024-12-06 03:34:57.857884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.857917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.858120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.858132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.858290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.858323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.858598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.858632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.858879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.858913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 
00:26:37.800 [2024-12-06 03:34:57.859220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.859254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.859542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.859575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.859765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.859778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.859930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.859942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.860110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.860144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 
00:26:37.800 [2024-12-06 03:34:57.860276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.860314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.860569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.860602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.860803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.860836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.861022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.861035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.800 [2024-12-06 03:34:57.861252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.861284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 
00:26:37.800 [2024-12-06 03:34:57.861500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.800 [2024-12-06 03:34:57.861533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.800 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.861788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.861800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.861981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.861994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.862157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.862191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.862443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.862476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 
00:26:37.801 [2024-12-06 03:34:57.862694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.862727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.862929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.862974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.863242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.863254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.863404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.863417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.863604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.863638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 
00:26:37.801 [2024-12-06 03:34:57.863850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.863883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.864110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.864144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.864327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.864361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.864546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.864579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.864858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.864890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 
00:26:37.801 [2024-12-06 03:34:57.865202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.865237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.865494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.865527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.865631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.865643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.865867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.865879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 00:26:37.801 [2024-12-06 03:34:57.866124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.801 [2024-12-06 03:34:57.866157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.801 qpair failed and we were unable to recover it. 
00:26:37.805 [2024-12-06 03:34:57.894205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.894240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.894515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.894549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.894830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.894864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.895084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.895097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.895256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.895289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 
00:26:37.805 [2024-12-06 03:34:57.895599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.895633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.895908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.895941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.896229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.896242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.896392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.896405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.896623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.896636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 
00:26:37.805 [2024-12-06 03:34:57.896865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.896877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.897037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.897051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.897304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.897338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.897528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.897560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.897820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.897853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 
00:26:37.805 [2024-12-06 03:34:57.898130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.898144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.898378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.898411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.898633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.898666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.898889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.898902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.899004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.899015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 
00:26:37.805 [2024-12-06 03:34:57.899242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.899256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.899513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.899526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.805 qpair failed and we were unable to recover it. 00:26:37.805 [2024-12-06 03:34:57.899811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.805 [2024-12-06 03:34:57.899844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.900037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.900072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.900274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.900286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 
00:26:37.806 [2024-12-06 03:34:57.900469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.900508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.900637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.900670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.900877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.900910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.901193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.901229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.901415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.901448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 
00:26:37.806 [2024-12-06 03:34:57.901653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.901686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.901863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.901876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.901964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.901976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.902091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.902122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.902397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.902430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 
00:26:37.806 [2024-12-06 03:34:57.902718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.902752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.902981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.903016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.903293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.903326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.903606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.903641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.903899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.903932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 
00:26:37.806 [2024-12-06 03:34:57.904146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.904181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.904462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.904495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.904745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.904779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.905015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.905049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.905249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.905283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 
00:26:37.806 [2024-12-06 03:34:57.905483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.905517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.905749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.905791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.905934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.905956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.906133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.906167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.906446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.906478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 
00:26:37.806 [2024-12-06 03:34:57.906770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.906803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.907075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.907089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.907351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.907384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.806 [2024-12-06 03:34:57.907659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.806 [2024-12-06 03:34:57.907692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.806 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.907823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.907836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 
00:26:37.807 [2024-12-06 03:34:57.908012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.908026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.908243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.908278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.908483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.908516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.908784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.908818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.908945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.908988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 
00:26:37.807 [2024-12-06 03:34:57.909295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.909330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.909543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.909577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.909772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.909785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.909874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.909886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.909977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.910015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 
00:26:37.807 [2024-12-06 03:34:57.910244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.910283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.910567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.910600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.910880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.910914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.911197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.911211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.911441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.911455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 
00:26:37.807 [2024-12-06 03:34:57.911612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.911625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.911778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.911792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.912028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.912063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.912384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.912416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 00:26:37.807 [2024-12-06 03:34:57.912718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.912751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it. 
00:26:37.807 [2024-12-06 03:34:57.912978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.807 [2024-12-06 03:34:57.913011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:37.807 qpair failed and we were unable to recover it.
[Condensed: the identical failure pair from posix.c:1054:posix_sock_create (connect() failed, errno = 111) and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock (sock connection error with addr=10.0.0.2, port=4420) repeats continuously from 03:34:57.913 through 03:34:57.940, mostly for tqpair=0x7f4cd8000b90, with later attempts also failing for tqpair=0x7f4cd4000b90, 0x7f4ce0000b90, and 0x115cbe0. Every attempt ends with "qpair failed and we were unable to recover it."]
00:26:38.101 [2024-12-06 03:34:57.941201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.941236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.941453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.941486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.941682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.941715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.941904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.941938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.942102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.942137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 
00:26:38.101 [2024-12-06 03:34:57.942429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.942462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.942669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.942712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.942962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.942980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.943254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.943271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.943498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.943515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 
00:26:38.101 [2024-12-06 03:34:57.943748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.943765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.943938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.943960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.944137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.944154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.944342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.944359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.944550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.944567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 
00:26:38.101 [2024-12-06 03:34:57.944788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.944805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.945063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.945097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.945381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.945415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.945617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.945650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-12-06 03:34:57.945862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-12-06 03:34:57.945896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 
00:26:38.101 [2024-12-06 03:34:57.946110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.946145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.946334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.946352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.946560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.946602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.946861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.946895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.947212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.947248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 
00:26:38.102 [2024-12-06 03:34:57.947510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.947543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.947857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.947875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.948032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.948049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.948214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.948232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.948480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.948499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 
00:26:38.102 [2024-12-06 03:34:57.948761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.948795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.948986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.949020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.949174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.949208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.949439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.949457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.949606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.949624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 
00:26:38.102 [2024-12-06 03:34:57.949774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.949818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.950091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.950142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.950386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.950420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.950633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.950667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.950941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.950965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 
00:26:38.102 [2024-12-06 03:34:57.951202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.951219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.951403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.951420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.951602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.951636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.951786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.951821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.952100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.952135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 
00:26:38.102 [2024-12-06 03:34:57.952283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.952301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.952569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.952603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.952813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.952847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.953043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-12-06 03:34:57.953077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-12-06 03:34:57.953225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.953259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 
00:26:38.103 [2024-12-06 03:34:57.953581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.953615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.953885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.953918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.954219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.954254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.954523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.954556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.954703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.954736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 
00:26:38.103 [2024-12-06 03:34:57.955023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.955059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.955333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.955350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.955591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.955608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.955860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.955878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.956048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.956066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 
00:26:38.103 [2024-12-06 03:34:57.956372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.956407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.956714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.956748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.956999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.957020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.957184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.957201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.957390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.957423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 
00:26:38.103 [2024-12-06 03:34:57.957632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.957666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.957988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.958023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.958332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.958367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.958654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.958687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-12-06 03:34:57.958969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.959003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 
00:26:38.103 [2024-12-06 03:34:57.959291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-12-06 03:34:57.959326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.104 [2024-12-06 03:34:57.959561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-12-06 03:34:57.959594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-12-06 03:34:57.959816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-12-06 03:34:57.959850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-12-06 03:34:57.959962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-12-06 03:34:57.959979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-12-06 03:34:57.960169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-12-06 03:34:57.960205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 
00:26:38.104 [2024-12-06 03:34:57.960446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.104 [2024-12-06 03:34:57.960480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.104 qpair failed and we were unable to recover it.
00:26:38.108 [... the identical connect()/qpair error triplet repeats continuously from 03:34:57.960 through 03:34:57.990, always for tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420, errno = 111 ...]
00:26:38.108 [2024-12-06 03:34:57.990887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.990921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.991206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.991224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.991450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.991468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.991631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.991648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.991820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.991855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 
00:26:38.108 [2024-12-06 03:34:57.992084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.992120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.992390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.992424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.992640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.992674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.992933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.992981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.993062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.993078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 
00:26:38.108 [2024-12-06 03:34:57.993232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.993248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.993501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.993534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.993836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.993870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.994170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.994188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.994481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.994516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 
00:26:38.108 [2024-12-06 03:34:57.994642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.994676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.994884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.994917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.995212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.995245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.995435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.995470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.995752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.995787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 
00:26:38.108 [2024-12-06 03:34:57.996058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.996093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.996286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.996333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.996529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.996547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-12-06 03:34:57.996738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-12-06 03:34:57.996772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:57.997052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.997087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 
00:26:38.109 [2024-12-06 03:34:57.997319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.997353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:57.997636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.997652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:57.997824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.997842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:57.998088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.998105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:57.998313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.998331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 
00:26:38.109 [2024-12-06 03:34:57.998557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.998592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:57.998852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.998886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:57.999128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.999149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:57.999432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.999466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:57.999670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:57.999705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 
00:26:38.109 [2024-12-06 03:34:57.999971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.000007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:58.000217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.000252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:58.000534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.000567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:58.000704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.000738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:58.000935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.000981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 
00:26:38.109 [2024-12-06 03:34:58.001188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.001222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:58.001481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.001514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:58.001706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.001740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:58.001958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.001995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:58.002199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.002234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 
00:26:38.109 [2024-12-06 03:34:58.002533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-12-06 03:34:58.002567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-12-06 03:34:58.002852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.002887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.003176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.003212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.003443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.003479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.003741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.003775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 
00:26:38.110 [2024-12-06 03:34:58.004083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.004120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.004408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.004442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.004720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.004755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.004942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.004987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.005177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.005211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 
00:26:38.110 [2024-12-06 03:34:58.005450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.005485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.005744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.005778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.006012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.006047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.006252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.006270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.006524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.006559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 
00:26:38.110 [2024-12-06 03:34:58.006819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.006854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.007132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.007168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.007477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.007511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.007800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.007817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.007984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.008002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 
00:26:38.110 [2024-12-06 03:34:58.008251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.008285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.008543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.008578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.008884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.008918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.009138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.009173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.009382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.009400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 
00:26:38.110 [2024-12-06 03:34:58.009516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.009551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.009753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.009786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.009957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.009998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.010278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.010296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 00:26:38.110 [2024-12-06 03:34:58.010415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.110 [2024-12-06 03:34:58.010432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.110 qpair failed and we were unable to recover it. 
00:26:38.110 [2024-12-06 03:34:58.010684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.010719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 00:26:38.111 [2024-12-06 03:34:58.010932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.010977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 00:26:38.111 [2024-12-06 03:34:58.011260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.011294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 00:26:38.111 [2024-12-06 03:34:58.011575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.011611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 00:26:38.111 [2024-12-06 03:34:58.011897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.011915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 
00:26:38.111 [2024-12-06 03:34:58.012168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.012187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 00:26:38.111 [2024-12-06 03:34:58.012406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.012424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 00:26:38.111 [2024-12-06 03:34:58.012601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.012619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 00:26:38.111 [2024-12-06 03:34:58.012792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.012809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 00:26:38.111 [2024-12-06 03:34:58.013052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.111 [2024-12-06 03:34:58.013089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.111 qpair failed and we were unable to recover it. 
00:26:38.111 [2024-12-06 03:34:58.013349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.013385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.013666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.013700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.013924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.013984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.014247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.014281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.014411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.014428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.014605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.014639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.014864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.014898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.015172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.015207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.015487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.015521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.015806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.015824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.016064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.016082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.016305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.016322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.016431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.016448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.016691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.016725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.016939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.016986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.017177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.017211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.017361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.017395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.017595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.017630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.017941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.017997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.018250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.111 [2024-12-06 03:34:58.018268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.111 qpair failed and we were unable to recover it.
00:26:38.111 [2024-12-06 03:34:58.018368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.018384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.018480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.018495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.018767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.018785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.019011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.019029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.019181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.019198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.019374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.019407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.019613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.019647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.019847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.019887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.020099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.020137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.020397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.020414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.020631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.020649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.020805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.020823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.020992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.021009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.021184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.021202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.021375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.021411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.021703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.021739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.022036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.022073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.022292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.022328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.022563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.022596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.022875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.022909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.023207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.023242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.023511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.023529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.023627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.023643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.023886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.023905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.024154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.024172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.024423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.024441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.024627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.024645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.024808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.024825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.112 [2024-12-06 03:34:58.025015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.112 [2024-12-06 03:34:58.025034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.112 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.025210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.025227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.025394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.025412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.025636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.025653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.025832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.025849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.026104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.026122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.026386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.026405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.026655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.026673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.026792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.026807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.027003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.027021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.027194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.027211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.027457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.027474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.027725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.027743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.027980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.027998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.028155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.028173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.028459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.028477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.028755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.028773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.028889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.028906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.029158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.029176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.029344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.029365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.029581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.029599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.029821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.029839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.030038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.030056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.030155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.030171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.030441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.030459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.030736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.030754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.030983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.031001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.031234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.031252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.031450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.031467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.113 qpair failed and we were unable to recover it.
00:26:38.113 [2024-12-06 03:34:58.031754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.113 [2024-12-06 03:34:58.031771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.031883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.031899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.032008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.032025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.032135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.032152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.032397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.032415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.032670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.032687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.032926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.032944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.033033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.033049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.033165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.033181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.033405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.033423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.033581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.033598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.033896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.033914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.034136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.034155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.034280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.034297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.034469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.034487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.034584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.034600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.034699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.034714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.034841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116ab20 is same with the state(6) to be set
00:26:38.114 [2024-12-06 03:34:58.035069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.114 [2024-12-06 03:34:58.035100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.114 qpair failed and we were unable to recover it.
00:26:38.114 [2024-12-06 03:34:58.035222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.035241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-12-06 03:34:58.035467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.035485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-12-06 03:34:58.035708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.035725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-12-06 03:34:58.035939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.035962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-12-06 03:34:58.036158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.036175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 
00:26:38.114 [2024-12-06 03:34:58.036342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.036361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-12-06 03:34:58.036477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.036494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-12-06 03:34:58.036646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.036664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-12-06 03:34:58.036814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.036832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-12-06 03:34:58.037082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.037100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 
00:26:38.114 [2024-12-06 03:34:58.037333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-12-06 03:34:58.037350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.037621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.037639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.037847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.037896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.038143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.038183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.038481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.038503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 
00:26:38.115 [2024-12-06 03:34:58.038682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.038698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.038850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.038867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.039055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.039075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.039323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.039343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.039498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.039516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 
00:26:38.115 [2024-12-06 03:34:58.039636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.039656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.039904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.039920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.040095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.040115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.040271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.040289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.040470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.040487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 
00:26:38.115 [2024-12-06 03:34:58.040595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.040616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.040788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.040805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.040965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.040984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.041258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.041276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.041456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.041474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 
00:26:38.115 [2024-12-06 03:34:58.041668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.041686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.041933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.041963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.042206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.042223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.042451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.042469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-12-06 03:34:58.042646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-12-06 03:34:58.042683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-12-06 03:34:58.042911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.042957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.043244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.043280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.043518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.043553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.043767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.043802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.043968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.044004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-12-06 03:34:58.044198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.044234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.044492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.044528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.044814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.044849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.045073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.045109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.045248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.045284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-12-06 03:34:58.045403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.045422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.045596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.045614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.045838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.045857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.046013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.046032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.046255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.046273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-12-06 03:34:58.046433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.046468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.046673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.046707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.046970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.047017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.047156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.047190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.047436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.047473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-12-06 03:34:58.047753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.047789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.048028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.048067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.048361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.048376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.048578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.048591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.048811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.048825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-12-06 03:34:58.049066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.049101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.049319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.049353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.049616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-12-06 03:34:58.049651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-12-06 03:34:58.049937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.049985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.050245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.050283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 
00:26:38.117 [2024-12-06 03:34:58.050498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.050549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.050748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.050782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.050910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.050943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.051169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.051207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.051416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.051430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 
00:26:38.117 [2024-12-06 03:34:58.051660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.051695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.051970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.052005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.052218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.052254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.052455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.052492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.052694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.052728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 
00:26:38.117 [2024-12-06 03:34:58.053011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.053045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.053243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.053279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.053494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.053530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.053685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.053720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.053962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.054000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 
00:26:38.117 [2024-12-06 03:34:58.054278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.054312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.054528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.054560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.054785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.054820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.055101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.055115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-12-06 03:34:58.055272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.055287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 
00:26:38.117 [2024-12-06 03:34:58.055531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-12-06 03:34:58.055565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it.
00:26:38.117 [... the same three-line connect()/qpair error sequence repeated ~113 more times between 03:34:58.055 and 03:34:58.081, always with errno = 111, tqpair=0x7f4cd8000b90, addr=10.0.0.2, port=4420 ...]
00:26:38.121 [2024-12-06 03:34:58.080964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.121 [2024-12-06 03:34:58.080978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.121 qpair failed and we were unable to recover it.
00:26:38.121 [2024-12-06 03:34:58.081143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.121 [2024-12-06 03:34:58.081158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.121 qpair failed and we were unable to recover it. 00:26:38.121 [2024-12-06 03:34:58.081276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.121 [2024-12-06 03:34:58.081288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.121 qpair failed and we were unable to recover it. 00:26:38.121 [2024-12-06 03:34:58.081447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.121 [2024-12-06 03:34:58.081460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.121 qpair failed and we were unable to recover it. 00:26:38.121 [2024-12-06 03:34:58.081636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.121 [2024-12-06 03:34:58.081649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.121 qpair failed and we were unable to recover it. 00:26:38.121 [2024-12-06 03:34:58.081866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.121 [2024-12-06 03:34:58.081880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.121 qpair failed and we were unable to recover it. 
00:26:38.121 [2024-12-06 03:34:58.082074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.082087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.082306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.082320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.082460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.082474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.082630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.082644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.082858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.082872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 
00:26:38.122 [2024-12-06 03:34:58.083097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.083110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.083329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.083342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.083494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.083510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.083735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.083749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.083908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.083921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 
00:26:38.122 [2024-12-06 03:34:58.084065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.084080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.084296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.084310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.084409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.084421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.084658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.084692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.084823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.084857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 
00:26:38.122 [2024-12-06 03:34:58.085180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.085221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.085364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.085380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.085555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.085589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.085872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.085907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.086193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.086230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 
00:26:38.122 [2024-12-06 03:34:58.086427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.086460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.086734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.086748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.086898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.086912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.087148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.087163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.087305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.087317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 
00:26:38.122 [2024-12-06 03:34:58.087503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.087517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.087703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.087718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.087909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.087943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-12-06 03:34:58.088237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-12-06 03:34:58.088272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.088556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.088591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-12-06 03:34:58.088810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.088844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.089051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.089088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.089373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.089408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.089689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.089703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.089805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.089817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-12-06 03:34:58.089981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.089994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.090214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.090228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.090319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.090331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.090544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.090558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.090770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.090783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-12-06 03:34:58.091089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.091103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.091210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.091223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.091406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.091420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.091643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.091657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.091832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.091846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-12-06 03:34:58.092041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.092055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.092204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.092218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.092378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.092393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.092537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.092551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.092635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.092647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-12-06 03:34:58.092883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.092897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.093059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.093074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.093233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.093246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.093462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.093475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.093625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.093661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-12-06 03:34:58.093962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.093998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.094199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.094235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-12-06 03:34:58.094448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-12-06 03:34:58.094481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.094738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.094772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.095037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.095074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 
00:26:38.124 [2024-12-06 03:34:58.095201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.095235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.095446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.095482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.095697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.095710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.095959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.095989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.096161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.096175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 
00:26:38.124 [2024-12-06 03:34:58.096393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.096429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.096688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.096723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.097036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.097071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.097354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.097390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.097588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.097602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 
00:26:38.124 [2024-12-06 03:34:58.097700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.097746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.097894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.097927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.098214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.098250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.098446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.098460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-12-06 03:34:58.098712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-12-06 03:34:58.098747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 
00:26:38.124 [... identical error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated continuously from 2024-12-06 03:34:58.098936 through 03:34:58.122410, console timestamps 00:26:38.124-00:26:38.128; duplicate entries omitted ...]
00:26:38.128 [2024-12-06 03:34:58.122643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.122657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.122870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.122884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.123148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.123161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.123370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.123384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.123595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.123607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 
00:26:38.128 [2024-12-06 03:34:58.123778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.123791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.124024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.124038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.124229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.124242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.124475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.124487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.124632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.124645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 
00:26:38.128 [2024-12-06 03:34:58.124814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.124828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.125006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.125020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.125248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.125261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.125400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.125413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.128 [2024-12-06 03:34:58.125562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.125575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 
00:26:38.128 [2024-12-06 03:34:58.125667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.128 [2024-12-06 03:34:58.125679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.128 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.125821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.125833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.125978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.125991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.126151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.126164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.126411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.126425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 
00:26:38.129 [2024-12-06 03:34:58.126591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.126605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.126879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.126909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.127156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.127173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.127343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.127356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.127607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.127642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 
00:26:38.129 [2024-12-06 03:34:58.127965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.128001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.128272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.128286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.128494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.128507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.128746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.128759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.128924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.128937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 
00:26:38.129 [2024-12-06 03:34:58.129177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.129190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.129412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.129426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.129608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.129621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.129835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.129849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.130112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.130126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 
00:26:38.129 [2024-12-06 03:34:58.130334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.130347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.130525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.130539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.130701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.130714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.130867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.130880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.130984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.130996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 
00:26:38.129 [2024-12-06 03:34:58.131202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.131217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.131492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.131504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.131766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.131779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.131929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.129 [2024-12-06 03:34:58.131942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.129 qpair failed and we were unable to recover it. 00:26:38.129 [2024-12-06 03:34:58.132107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.132121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 
00:26:38.130 [2024-12-06 03:34:58.132264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.132277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.132430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.132443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.132607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.132620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.132760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.132773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.132958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.132972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 
00:26:38.130 [2024-12-06 03:34:58.133116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.133129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.133287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.133301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.133439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.133452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.133546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.133558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.133765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.133778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 
00:26:38.130 [2024-12-06 03:34:58.133990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.134004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.134097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.134108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.134261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.134274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.134420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.134455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.134661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.134695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 
00:26:38.130 [2024-12-06 03:34:58.134886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.134920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.135136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.135170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.135373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.135413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.135668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.135701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.135899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.135912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 
00:26:38.130 [2024-12-06 03:34:58.135990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.136002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.136211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.136223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.136383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.136396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.136565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.136577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.136831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.136844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 
00:26:38.130 [2024-12-06 03:34:58.137063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.137077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.137322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.137335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.137577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.137591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.130 [2024-12-06 03:34:58.137692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.130 [2024-12-06 03:34:58.137704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.130 qpair failed and we were unable to recover it. 00:26:38.131 [2024-12-06 03:34:58.137942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.137959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 
00:26:38.131 [2024-12-06 03:34:58.138202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.138216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-12-06 03:34:58.138304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.138315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-12-06 03:34:58.138476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.138490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-12-06 03:34:58.138670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.138703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-12-06 03:34:58.138848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.138881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 
00:26:38.131 [2024-12-06 03:34:58.139132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.139168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-12-06 03:34:58.139413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.139431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-12-06 03:34:58.139638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.139651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-12-06 03:34:58.139859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.139872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-12-06 03:34:58.140117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-12-06 03:34:58.140130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 
00:26:38.131 [2024-12-06 03:34:58.141433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.131 [2024-12-06 03:34:58.141509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:38.131 qpair failed and we were unable to recover it.
00:26:38.132 [2024-12-06 03:34:58.149945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.132 [2024-12-06 03:34:58.149993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.132 qpair failed and we were unable to recover it.
00:26:38.132 [2024-12-06 03:34:58.150244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-12-06 03:34:58.150263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-12-06 03:34:58.150531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-12-06 03:34:58.150567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-12-06 03:34:58.150763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-12-06 03:34:58.150796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-12-06 03:34:58.151000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.151036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.151310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.151343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 
00:26:38.133 [2024-12-06 03:34:58.151541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.151557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.151667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.151682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.151908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.151924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.152025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.152042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.152196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.152212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 
00:26:38.133 [2024-12-06 03:34:58.152389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.152405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.152567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.152583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.152741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.152779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.153041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.153075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.153376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.153408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 
00:26:38.133 [2024-12-06 03:34:58.153615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.153648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.153844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.153878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.154069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.154103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.154374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.154407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.154596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.154612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 
00:26:38.133 [2024-12-06 03:34:58.154770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.154788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.155002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.155019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.155202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.155218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.155379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.155395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.155571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.155587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 
00:26:38.133 [2024-12-06 03:34:58.155772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.155788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.155994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.156014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.156197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-12-06 03:34:58.156214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-12-06 03:34:58.156367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.156383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.156619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.156635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 
00:26:38.134 [2024-12-06 03:34:58.156827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.156844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.156995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.157011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.157177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.157193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.157286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.157301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.157539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.157555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 
00:26:38.134 [2024-12-06 03:34:58.157769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.157787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.157957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.157976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.158168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.158185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.158333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.158351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.158565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.158582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 
00:26:38.134 [2024-12-06 03:34:58.158749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.158767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.158970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.159005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.159290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.159322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.159589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.159605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.159829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.159846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 
00:26:38.134 [2024-12-06 03:34:58.160004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.160020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.160183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.160199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.160303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.160318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.160540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.160557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-12-06 03:34:58.160735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-12-06 03:34:58.160751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 
00:26:38.134 [2024-12-06 03:34:58.160913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.134 [2024-12-06 03:34:58.160929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.134 qpair failed and we were unable to recover it.
00:26:38.134 [2024-12-06 03:34:58.161089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.134 [2024-12-06 03:34:58.161123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.134 qpair failed and we were unable to recover it.
00:26:38.134 [2024-12-06 03:34:58.161372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.134 [2024-12-06 03:34:58.161405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.134 qpair failed and we were unable to recover it.
00:26:38.134 [2024-12-06 03:34:58.161685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.134 [2024-12-06 03:34:58.161725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.134 qpair failed and we were unable to recover it.
00:26:38.134 [2024-12-06 03:34:58.161929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.134 [2024-12-06 03:34:58.161973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.134 qpair failed and we were unable to recover it.
00:26:38.134 [2024-12-06 03:34:58.162271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.134 [2024-12-06 03:34:58.162305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.134 qpair failed and we were unable to recover it.
00:26:38.134 [2024-12-06 03:34:58.162435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.134 [2024-12-06 03:34:58.162452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.134 qpair failed and we were unable to recover it.
00:26:38.134 [2024-12-06 03:34:58.162611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.134 [2024-12-06 03:34:58.162627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.162784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.162801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.162966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.162984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.163208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.163226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.163463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.163479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.163642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.163660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.163755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.163771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.163977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.163996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.164211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.164230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.164398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.164414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.164608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.164642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.164827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.164860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.165171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.165206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.165388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.165405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.165646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.165663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.165772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.165791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.165958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.165995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.166190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.166223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.166347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.166380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.166566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.166607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.166721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.166737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.166963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.166999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.167206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.167240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.167498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.167538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.167771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.167787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.167955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.167972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.168118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.168136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.168285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.168302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.135 [2024-12-06 03:34:58.168519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.135 [2024-12-06 03:34:58.168552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.135 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.168738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.168771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.169060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.169077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.169165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.169181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.169335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.169350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.169443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.169458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.169566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.169581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.169679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.169695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.169852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.169869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.170181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.170216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.170380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.170394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.170626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.170640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.170912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.170925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.171144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.171159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.171251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.171262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.171485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.171498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.171637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.171651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.171808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.171821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.171970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.172008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.172294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.172329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.172595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.172613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.172731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.172748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.172959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.173000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.173190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.173223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.173501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.173541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.173684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.173701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.173873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.173906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.174175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.174210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.174456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.174474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.136 [2024-12-06 03:34:58.174711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.136 [2024-12-06 03:34:58.174728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.136 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.174968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.174986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.175206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.175222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.175332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.175347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.175536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.175569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.175845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.175880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.176162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.176195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.176429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.176466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.176670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.176703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.176927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.176973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.177252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.177285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.177564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.177597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.177794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.177829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.178120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.178155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.178379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.178393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.178530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.178543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.178693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.178707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.178912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.178925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.179100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.179113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.179329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.179341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.179530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.179548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.179631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.179642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.179804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.179816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.180048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.180062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.180293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.180306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.180473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.180487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.180707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.180740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.180981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.181017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.181204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.137 [2024-12-06 03:34:58.181236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.137 qpair failed and we were unable to recover it.
00:26:38.137 [2024-12-06 03:34:58.181427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.138 [2024-12-06 03:34:58.181460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.138 qpair failed and we were unable to recover it.
00:26:38.138 [2024-12-06 03:34:58.181594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.138 [2024-12-06 03:34:58.181627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.138 qpair failed and we were unable to recover it.
00:26:38.138 [2024-12-06 03:34:58.181898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.181911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.182103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.182116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.182265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.182278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.182495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.182508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.182684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.182698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 
00:26:38.138 [2024-12-06 03:34:58.182890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.182903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.183128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.183162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.183435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.183469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.183718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.183752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.183960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.183996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 
00:26:38.138 [2024-12-06 03:34:58.184209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.184242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.184492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.184526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.184794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.184808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.184944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.184963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.185180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.185194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 
00:26:38.138 [2024-12-06 03:34:58.185336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.185349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.185609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.185622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.185786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.185819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.185971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.186006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.186325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.186358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 
00:26:38.138 [2024-12-06 03:34:58.186487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.186499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.186730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.186742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.186998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.187011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.187220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.187232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.187447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.187459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 
00:26:38.138 [2024-12-06 03:34:58.187666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.187680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.138 qpair failed and we were unable to recover it. 00:26:38.138 [2024-12-06 03:34:58.187765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.138 [2024-12-06 03:34:58.187776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.187941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.187967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.188109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.188121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.188313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.188351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 
00:26:38.139 [2024-12-06 03:34:58.188628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.188662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.188946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.188992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.189263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.189296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.189499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.189512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.189660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.189672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 
00:26:38.139 [2024-12-06 03:34:58.189900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.189913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.190070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.190083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.190218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.190231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.190465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.190477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.190574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.190586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 
00:26:38.139 [2024-12-06 03:34:58.190817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.190830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.190934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.190946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.191090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.191103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.191254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.191266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.191462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.191474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 
00:26:38.139 [2024-12-06 03:34:58.191630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.191642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.191795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.191808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.191958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.191971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.192176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.192189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.192391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.192403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 
00:26:38.139 [2024-12-06 03:34:58.192489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.192501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.139 qpair failed and we were unable to recover it. 00:26:38.139 [2024-12-06 03:34:58.192660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.139 [2024-12-06 03:34:58.192672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.192768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.192780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.192916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.192928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.193078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.193092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 
00:26:38.140 [2024-12-06 03:34:58.193256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.193268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.193480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.193518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.193643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.193661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.193837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.193853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.194110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.194128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 
00:26:38.140 [2024-12-06 03:34:58.194287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.194304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.194461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.194478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.194660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.194676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.194893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.194909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.195116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.195133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 
00:26:38.140 [2024-12-06 03:34:58.195293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.195310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.195528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.195545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.195650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.195665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.195828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.195844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.196064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.196086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 
00:26:38.140 [2024-12-06 03:34:58.196265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.196282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.196498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.196515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.196613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.196628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.196796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.196813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.196901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.196916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 
00:26:38.140 [2024-12-06 03:34:58.197066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.197083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.197248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.197265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.197366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.197381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.197575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.197592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.197706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.197722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 
00:26:38.140 [2024-12-06 03:34:58.197985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.198002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.198158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.198174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.198282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.198298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.140 [2024-12-06 03:34:58.198404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.140 [2024-12-06 03:34:58.198421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.140 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.198525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.198540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 
00:26:38.141 [2024-12-06 03:34:58.198716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.198732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.198880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.198897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.198993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.199009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.199186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.199203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.199416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.199433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 
00:26:38.141 [2024-12-06 03:34:58.199524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.199541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.199802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.199818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.199909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.199924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.200166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.200184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.200348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.200365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 
00:26:38.141 [2024-12-06 03:34:58.200514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.200530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.200772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.200799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.200964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.200981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.201128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.201146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.201238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.201253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 
00:26:38.141 [2024-12-06 03:34:58.201490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.201507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.201675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.201691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.201868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.201885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.201995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.202013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.202111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.202126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 
00:26:38.141 [2024-12-06 03:34:58.202337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.202369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.202501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.202534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.202725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.202758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.202957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.202974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.203184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.203201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 
00:26:38.141 [2024-12-06 03:34:58.203370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.203387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.203472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.203486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.203677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.203693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.203839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.203855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.203958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.203975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 
00:26:38.141 [2024-12-06 03:34:58.204137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.204153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.141 qpair failed and we were unable to recover it. 00:26:38.141 [2024-12-06 03:34:58.204302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.141 [2024-12-06 03:34:58.204318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.142 qpair failed and we were unable to recover it. 00:26:38.142 [2024-12-06 03:34:58.204604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.142 [2024-12-06 03:34:58.204620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.142 qpair failed and we were unable to recover it. 00:26:38.142 [2024-12-06 03:34:58.204874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.142 [2024-12-06 03:34:58.204908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.142 qpair failed and we were unable to recover it. 00:26:38.142 [2024-12-06 03:34:58.205195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.142 [2024-12-06 03:34:58.205229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.142 qpair failed and we were unable to recover it. 
00:26:38.142 [2024-12-06 03:34:58.205473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.142 [2024-12-06 03:34:58.205491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.142 qpair failed and we were unable to recover it. 00:26:38.142 [2024-12-06 03:34:58.205659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.142 [2024-12-06 03:34:58.205677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.205865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.205882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.206117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.206139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.206294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.206310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 
00:26:38.422 [2024-12-06 03:34:58.206480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.206496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.206674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.206691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.206849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.206864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.207150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.207164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.207388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.207401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 
00:26:38.422 [2024-12-06 03:34:58.207503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.207517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.207697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.207711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.207820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.207834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.207993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.208006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.208243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.208257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 
00:26:38.422 [2024-12-06 03:34:58.208441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.208454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.208646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.208659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.208826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.208840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.209071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.209086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.209300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.209315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 
00:26:38.422 [2024-12-06 03:34:58.209411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.209426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.209522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.209535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.209822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.209836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.210002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.210036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.210226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.210257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 
00:26:38.422 [2024-12-06 03:34:58.210405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.210437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.210622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.210655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.210899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.422 [2024-12-06 03:34:58.210930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.422 qpair failed and we were unable to recover it. 00:26:38.422 [2024-12-06 03:34:58.211190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.211221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.211474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.211505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 
00:26:38.423 [2024-12-06 03:34:58.211767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.211788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.212006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.212023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.212124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.212139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.212393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.212410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.212640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.212656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 
00:26:38.423 [2024-12-06 03:34:58.212895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.212913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.213165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.213182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.213348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.213365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.213513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.213530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.213677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.213693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 
00:26:38.423 [2024-12-06 03:34:58.213799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.213815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.213900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.213917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.214124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.214141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.214315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.214348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.214609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.214644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 
00:26:38.423 [2024-12-06 03:34:58.214780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.214798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.214979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.214996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.215186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.215203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.215300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.215318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.215479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.215496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 
00:26:38.423 [2024-12-06 03:34:58.215640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.215657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.215868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.215887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.216045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.216060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.216219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.216234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.216400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.216416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 
00:26:38.423 [2024-12-06 03:34:58.216602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.216619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.216781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.216796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.216984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.217004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.217166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.217199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.217394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.217426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 
00:26:38.423 [2024-12-06 03:34:58.217645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.217676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.217938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.217959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.218077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.218092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.218241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.218258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.218471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.218486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 
00:26:38.423 [2024-12-06 03:34:58.218704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.218721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.423 [2024-12-06 03:34:58.218895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.423 [2024-12-06 03:34:58.218911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.423 qpair failed and we were unable to recover it. 00:26:38.424 [2024-12-06 03:34:58.219148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.424 [2024-12-06 03:34:58.219164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.424 qpair failed and we were unable to recover it. 00:26:38.424 [2024-12-06 03:34:58.219355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.424 [2024-12-06 03:34:58.219371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.424 qpair failed and we were unable to recover it. 00:26:38.424 [2024-12-06 03:34:58.219608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.424 [2024-12-06 03:34:58.219623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.424 qpair failed and we were unable to recover it. 
00:26:38.424 [2024-12-06 03:34:58.219736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.219768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.219928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.219978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.220250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.220283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.220430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.220462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.220714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.220747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.220886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.220901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.221015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.221030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.221270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.221286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.221450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.221465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.221576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.221590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.221685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.221701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.221801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.221816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.221895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.221911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.221989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.222005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.222091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.222106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.222196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.222212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.222356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.222371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.222587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.222602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.222751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.222766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.222926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.222943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.223237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.223272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.223400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.223444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.223604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.223619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.223706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.223722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.223802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.223817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.223977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.223994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.224162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.224178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.224267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.224282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.224439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.224458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.224544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.224559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.224665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.224681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.224788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.224804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.224972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.224987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.225152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.225167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.225262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.424 [2024-12-06 03:34:58.225283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.424 qpair failed and we were unable to recover it.
00:26:38.424 [2024-12-06 03:34:58.225388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.225409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.225513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.225534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.225721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.225748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.225955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.225991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.226170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.226202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.226303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.226317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.226418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.226431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.226501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.226512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.226666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.226678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.226829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.226840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.226934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.226945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.227056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.227068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.227148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.227160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.227297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.227309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.227464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.227475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.227564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.227576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.227659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.227672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.227826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.227838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.227971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.227984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.228169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.228181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.228274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.228285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.228447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.228459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.228544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.228554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.228696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.228707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.228854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.228866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.229019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.229031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.229115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.229127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.229277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.229287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.229427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.229439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.229533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.229543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.229637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.229648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.229873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.229885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.229973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.229985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.230046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.230059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.230142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.230153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.230256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.230267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.425 [2024-12-06 03:34:58.230350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.425 [2024-12-06 03:34:58.230361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.425 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.230432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.230443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.230522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.230533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.230629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.230641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.230707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.230717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.230792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.230803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.230885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.230896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.231064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.231076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.231243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.231255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.231406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.231418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.231554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.231564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.231654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.231666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.231747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.231759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.231962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.231974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.232055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.232066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.232214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.232227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.232412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.232423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.232505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.232516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.232576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.232588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.232653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.232664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.232725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.232736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.232810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.232821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.232992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.233003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.233152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.426 [2024-12-06 03:34:58.233163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.426 qpair failed and we were unable to recover it.
00:26:38.426 [2024-12-06 03:34:58.233252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.426 [2024-12-06 03:34:58.233281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.426 qpair failed and we were unable to recover it. 00:26:38.426 [2024-12-06 03:34:58.233461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.426 [2024-12-06 03:34:58.233481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.426 qpair failed and we were unable to recover it. 00:26:38.426 [2024-12-06 03:34:58.233654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.426 [2024-12-06 03:34:58.233670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.426 qpair failed and we were unable to recover it. 00:26:38.426 [2024-12-06 03:34:58.233773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.426 [2024-12-06 03:34:58.233788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.426 qpair failed and we were unable to recover it. 00:26:38.426 [2024-12-06 03:34:58.233891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.426 [2024-12-06 03:34:58.233907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.426 qpair failed and we were unable to recover it. 
00:26:38.426 [2024-12-06 03:34:58.234013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.426 [2024-12-06 03:34:58.234030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.426 qpair failed and we were unable to recover it. 00:26:38.426 [2024-12-06 03:34:58.234194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.426 [2024-12-06 03:34:58.234209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.426 qpair failed and we were unable to recover it. 00:26:38.426 [2024-12-06 03:34:58.234307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.426 [2024-12-06 03:34:58.234322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.234496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.234513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.234594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.234609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-12-06 03:34:58.234798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.234813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.234902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.234917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.235160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.235175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.235321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.235341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.235473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.235488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-12-06 03:34:58.235595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.235610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.235718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.235733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.235824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.235839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.235992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.236010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.236117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.236132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-12-06 03:34:58.236244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.236266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.236433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.236445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.236522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.236533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.236617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.236628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.236766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.236777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-12-06 03:34:58.236929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.236940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.237094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.237105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.237322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.237332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.237483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.237494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.237649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.237660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-12-06 03:34:58.237741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.237752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.237907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.237919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.238008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.238194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.238296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-12-06 03:34:58.238378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.238486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.238586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.238665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.238737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-12-06 03:34:58.238820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.238985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.238996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.239081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.239091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.239171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.239181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-12-06 03:34:58.239338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-12-06 03:34:58.239349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-12-06 03:34:58.239416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.239427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.239521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.239531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.239680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.239691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.239775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.239787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.239928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.239939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-12-06 03:34:58.240033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.240043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.240112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.240123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.240311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.240321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.240414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.240427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.240514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.240524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-12-06 03:34:58.240599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.240609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.240708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.240719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.240865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.240875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.240957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.240968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.241103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.241114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-12-06 03:34:58.241259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.241270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.241339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.241349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.241446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.241457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.241589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.241599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.241682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.241692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-12-06 03:34:58.241784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.241795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.241872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.241882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.241977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.241989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.242134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.242144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.242241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.242251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-12-06 03:34:58.242455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.242465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.242534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.242545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.242609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.242619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.242683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.242693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.242785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.242795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-12-06 03:34:58.242989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.243000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.243140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.243150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.243236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.243247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.243312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.243322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.243462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.243472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-12-06 03:34:58.243557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.243568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.243727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-12-06 03:34:58.243738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-12-06 03:34:58.243792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-12-06 03:34:58.243802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-12-06 03:34:58.243881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-12-06 03:34:58.243891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-12-06 03:34:58.243967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-12-06 03:34:58.243978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 
00:26:38.429 [2024-12-06 03:34:58.244055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.429 [2024-12-06 03:34:58.244067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.429 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / qpair failed record repeats for tqpair=0x7f4cd8000b90 from 03:34:58.244 through 03:34:58.250 ...]
00:26:38.430 [2024-12-06 03:34:58.250999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.430 [2024-12-06 03:34:58.251048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.430 qpair failed and we were unable to recover it.
[... the same record then repeats, alternating between tqpair=0x115cbe0 and tqpair=0x7f4cd8000b90, from 03:34:58.251 through 03:34:58.262; every connect() attempt to 10.0.0.2:4420 fails with errno = 111 and no qpair is recovered ...]
00:26:38.432 [2024-12-06 03:34:58.262549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.262564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.262795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.262809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.262966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.262981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.263081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.263096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.263202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.263217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 
00:26:38.432 [2024-12-06 03:34:58.263425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.263440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.263707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.263739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.263985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.264020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.264269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.264302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.264575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.264608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 
00:26:38.432 [2024-12-06 03:34:58.264891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.264906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.265068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.265083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.265237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.265253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.265365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.265381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.265552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.265571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 
00:26:38.432 [2024-12-06 03:34:58.265710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.265725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.265931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.265946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.266180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.266195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.266379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.266394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.266557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.266572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 
00:26:38.432 [2024-12-06 03:34:58.266727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.266742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.267004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.267041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.267224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.267256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.267450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.267483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.267686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.267719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 
00:26:38.432 [2024-12-06 03:34:58.267990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.268025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.268217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.268249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.432 [2024-12-06 03:34:58.268523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.432 [2024-12-06 03:34:58.268556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.432 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.268811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.268844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.269064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.269079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 
00:26:38.433 [2024-12-06 03:34:58.269291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.269306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.269537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.269552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.269711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.269726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.269937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.269956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.270141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.270156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 
00:26:38.433 [2024-12-06 03:34:58.270242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.270257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.270488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.270503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.270609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.270624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.270858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.270873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.271085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.271119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 
00:26:38.433 [2024-12-06 03:34:58.271381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.271413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.271704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.271735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.271986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.272002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.272150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.272172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.272323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.272338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 
00:26:38.433 [2024-12-06 03:34:58.272502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.272517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.272698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.272712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.272958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.272973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.273064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.273079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.273285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.273299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 
00:26:38.433 [2024-12-06 03:34:58.273506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.273520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.273693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.273708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.273874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.273888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.273978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.273993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.274216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.274231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 
00:26:38.433 [2024-12-06 03:34:58.274377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.274392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.274573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.274588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.274779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.274794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.275026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.275042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.275296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.275311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 
00:26:38.433 [2024-12-06 03:34:58.275543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.275558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.275716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.275731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.275983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.276017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.276292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.276325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 00:26:38.433 [2024-12-06 03:34:58.276606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.433 [2024-12-06 03:34:58.276639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.433 qpair failed and we were unable to recover it. 
00:26:38.433 [2024-12-06 03:34:58.276831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.276847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 00:26:38.434 [2024-12-06 03:34:58.277013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.277046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 00:26:38.434 [2024-12-06 03:34:58.277316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.277348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 00:26:38.434 [2024-12-06 03:34:58.277622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.277639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 00:26:38.434 [2024-12-06 03:34:58.277801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.277816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 
00:26:38.434 [2024-12-06 03:34:58.277981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.277996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 00:26:38.434 [2024-12-06 03:34:58.278245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.278260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 00:26:38.434 [2024-12-06 03:34:58.278498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.278513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 00:26:38.434 [2024-12-06 03:34:58.278738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.278752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 00:26:38.434 [2024-12-06 03:34:58.278927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.434 [2024-12-06 03:34:58.278941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.434 qpair failed and we were unable to recover it. 
00:26:38.434 [2024-12-06 03:34:58.279171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.279186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.279418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.279434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.279627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.279660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.279908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.279940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.280243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.280259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.280414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.280429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.280659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.280674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.280853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.280869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.281119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.281152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.281351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.281383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.281590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.281621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.281801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.281832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.282072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.282105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.282351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.282382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.282574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.282588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.282819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.282834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.282926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.282941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.283155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.283170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.283312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.283327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.283554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.283567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.283830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.283845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.283990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.284006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.284164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.284178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.284403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.284419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.284682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.284697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.284959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.284975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.285139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.285153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.434 [2024-12-06 03:34:58.285319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.434 [2024-12-06 03:34:58.285350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.434 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.285570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.285601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.285909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.285923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.286183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.286199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.286362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.286376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.286574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.286605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.286878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.286912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.287136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.287180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.287457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.287490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.287671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.287702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.287837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.287868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.288134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.288168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.288354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.288368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.288448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.288463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.288679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.288694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.288876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.288891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.289067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.289100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.289416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.289449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.289702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.289746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.289901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.289916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.290147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.290163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.290405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.290419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.290670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.290684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.290843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.290858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.291069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.291084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.291169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.291184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.291366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.291380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.291614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.291628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.291736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.291750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.291928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.291942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.292199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.292215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.292374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.292389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.292622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.292638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.292794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.292809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.293009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.293049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.293262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.293295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.293498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.293532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.293782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.435 [2024-12-06 03:34:58.293797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.435 qpair failed and we were unable to recover it.
00:26:38.435 [2024-12-06 03:34:58.293957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.293973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.294124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.294138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.294230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.294245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.294476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.294491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.294703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.294718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.294895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.294909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.295159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.295174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.295335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.295351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.295509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.295525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.295756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.295772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.295967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.295984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.296142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.296157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.296386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.296401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.296649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.296664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.296907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.296921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.297193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.297208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.297424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.297439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.297621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.297636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.297817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.297832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.297975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.297990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.298173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.298204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.298452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.298484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.298696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.298729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.298936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.298985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.299260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.299292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.299528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.299561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.299736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.299752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.299985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.300019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.300286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.300319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.300523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.300555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.300788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.300819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.301097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.301130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-12-06 03:34:58.301398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-12-06 03:34:58.301413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.301652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.301667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.301780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.301795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.301957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.301973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.302140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.302172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.302378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.302412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.302621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.302653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.302803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.302831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.303161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.303199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.303473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.303505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.303715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.303748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.304041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.304053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.304272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.304282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.304432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.304442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.304670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.304681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.304905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.304915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.305087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.305098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.305319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.437 [2024-12-06 03:34:58.305329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.437 qpair failed and we were unable to recover it.
00:26:38.437 [2024-12-06 03:34:58.305475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.305489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.305632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.305662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.305938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.305982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.306175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.306207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.306330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.306363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 
00:26:38.437 [2024-12-06 03:34:58.306548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.306580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.306850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.306881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.307019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.307053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.307351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.307383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.307668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.307699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 
00:26:38.437 [2024-12-06 03:34:58.307974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.307985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.308209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.308220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.308443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.308453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.308585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.308595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.308731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.308743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 
00:26:38.437 [2024-12-06 03:34:58.308965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.308976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.309175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.309185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.309408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.309419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.309595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.309606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-12-06 03:34:58.309839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.309870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 
00:26:38.437 [2024-12-06 03:34:58.310167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-12-06 03:34:58.310200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.310378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.310411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.310652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.310683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.310897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.310907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.311133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.311144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-12-06 03:34:58.311378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.311388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.311621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.311631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.311780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.311791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.312008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.312019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.312173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.312205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-12-06 03:34:58.312395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.312427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.312747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.312778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.312991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.313002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.313174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.313206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.313418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.313449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-12-06 03:34:58.313636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.313668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.313922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.313932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.314070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.314080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.314286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.314318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.314563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.314595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-12-06 03:34:58.314850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.314888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.315170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.315203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.315404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.315435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.315692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.315724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.316018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.316051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-12-06 03:34:58.316275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.316306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.316549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.316579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.316780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.316791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.317015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.317026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.317102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.317113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-12-06 03:34:58.317292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.317302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.317456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.317488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.317677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.317708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.317975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.318008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.318207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.318218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-12-06 03:34:58.318447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.318479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-12-06 03:34:58.318741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-12-06 03:34:58.318773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.319064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.319074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.319220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.319231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.319476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.319507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 
00:26:38.439 [2024-12-06 03:34:58.319801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.319833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.320099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.320111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.320339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.320351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.320509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.320520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.320741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.320751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 
00:26:38.439 [2024-12-06 03:34:58.320968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.320980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.321212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.321245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.321609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.321682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.321879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.321896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.322124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.322159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 
00:26:38.439 [2024-12-06 03:34:58.322435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.322468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.322616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.322651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.322895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.322928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.323132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.323148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-12-06 03:34:58.323292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-12-06 03:34:58.323307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 
00:26:38.439 [2024-12-06 03:34:58.323538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.439 [2024-12-06 03:34:58.323553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:38.439 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / qpair-failed triplet for tqpair=0x115cbe0 (addr=10.0.0.2, port=4420, errno = 111) repeats continuously from 03:34:58.323783 through 03:34:58.349176 ...]
00:26:38.442 [2024-12-06 03:34:58.349297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.442 [2024-12-06 03:34:58.349332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:38.442 qpair failed and we were unable to recover it.
00:26:38.442 [2024-12-06 03:34:58.349520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.349533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.349690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.349701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.349880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.349891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.350061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.350072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.350216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.350227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 
00:26:38.442 [2024-12-06 03:34:58.350458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.350468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.350693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.350704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.350889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.350899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.350990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.351001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.351102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.351113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 
00:26:38.442 [2024-12-06 03:34:58.351336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.351368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.351559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.351592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.351859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.351898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.352098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-12-06 03:34:58.352132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-12-06 03:34:58.352407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.352439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-12-06 03:34:58.352697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.352728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.352970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.353004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.353127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.353137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.353335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.353345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.353592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.353603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-12-06 03:34:58.353692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.353703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.353858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.353868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.354019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.354030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.354227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.354238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.354368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.354379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-12-06 03:34:58.354510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.354521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.354685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.354719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.354985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.355019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.355215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.355247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.355410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.355420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-12-06 03:34:58.355635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.355667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.355871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.355904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.356118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.356151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.356341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.356352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.356495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.356506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-12-06 03:34:58.356730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.356741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.356820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.356831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.357031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.357042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.357266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.357276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.357439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.357457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-12-06 03:34:58.357675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.357690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.357845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.357860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.358091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.358106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.358387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.358401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.358690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.358723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-12-06 03:34:58.359020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.359035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.359257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.359273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.359433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.359447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.359684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.359699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.359966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.359982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-12-06 03:34:58.360139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.360154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-12-06 03:34:58.360368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-12-06 03:34:58.360400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.360686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.360717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.360999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.361033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.361224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.361238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 
00:26:38.444 [2024-12-06 03:34:58.361473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.361487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.361642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.361684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.361882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.361914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.362176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.362209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.362455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.362485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 
00:26:38.444 [2024-12-06 03:34:58.362662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.362693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.362991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.363024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.363232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.363264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.363494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.363509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.363608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.363622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 
00:26:38.444 [2024-12-06 03:34:58.363792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.363806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.364040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.364055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.364211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.364226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.364464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.364494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.364705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.364737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 
00:26:38.444 [2024-12-06 03:34:58.364965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.365000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.365218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.365235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.365411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.365423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.365645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.365656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-12-06 03:34:58.365907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-12-06 03:34:58.365938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 
00:26:38.444 [2024-12-06 03:34:58.366191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.444 [2024-12-06 03:34:58.366224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.444 qpair failed and we were unable to recover it.
00:26:38.447 [2024-12-06 03:34:58.392003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.392036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-12-06 03:34:58.392231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.392263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-12-06 03:34:58.392546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.392556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-12-06 03:34:58.392720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.392730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-12-06 03:34:58.392962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.392972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 
00:26:38.447 [2024-12-06 03:34:58.393223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.393234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-12-06 03:34:58.393458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.393468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-12-06 03:34:58.393640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.393650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-12-06 03:34:58.393857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.393868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-12-06 03:34:58.394048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-12-06 03:34:58.394062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-12-06 03:34:58.394254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.394265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.394410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.394420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.394736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.394748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.394896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.394929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.395194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.395227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-12-06 03:34:58.395406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.395439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.395685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.395718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.395986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.396019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.396214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.396225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.396473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.396483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-12-06 03:34:58.396739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.396750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.396903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.396913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.397113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.397124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.397352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.397363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.397458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.397468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-12-06 03:34:58.397665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.397676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.397746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.397757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.397994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.398005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.398101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.398112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.398358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.398368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-12-06 03:34:58.398541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.398551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.398761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.398772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.398941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.398955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.399174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.399185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.399371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.399381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-12-06 03:34:58.399604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.399614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.399769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.399781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.399996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.400007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.400233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.400243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.400445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.400455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-12-06 03:34:58.400543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.400553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.400742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.400752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.401000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.401012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.401173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.401184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.401276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.401288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-12-06 03:34:58.401431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.401442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.401618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.401629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.401801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-12-06 03:34:58.401811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-12-06 03:34:58.402017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.402028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.402247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.402260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-12-06 03:34:58.402456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.402466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.402641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.402652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.402878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.402888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.403095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.403106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.403331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.403342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-12-06 03:34:58.403499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.403512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.403600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.403610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.403810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.403821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.404044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.404055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.404196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.404206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-12-06 03:34:58.404408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.404419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.404656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.404667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.404824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.404834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.404990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.405002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.405083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.405094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-12-06 03:34:58.405298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.405309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.405487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.405498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.405809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.405819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.405991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.406002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.406241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.406252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-12-06 03:34:58.406410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.406421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.406620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.406631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.406783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.406793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.406976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.406986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.407134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.407145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-12-06 03:34:58.407361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.407372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.407517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.407528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.407673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.407704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.407907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.407940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-12-06 03:34:58.408200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-12-06 03:34:58.408233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.452 [2024-12-06 03:34:58.432522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-12-06 03:34:58.432532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-12-06 03:34:58.432731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-12-06 03:34:58.432741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-12-06 03:34:58.432957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-12-06 03:34:58.432968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-12-06 03:34:58.433150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-12-06 03:34:58.433160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-12-06 03:34:58.433337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-12-06 03:34:58.433348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 
00:26:38.452 [2024-12-06 03:34:58.433498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-12-06 03:34:58.433508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.433664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.433674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.433750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.433760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.433977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.433988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.434240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.434251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-12-06 03:34:58.434399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.434409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.434544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.434555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.434747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.434759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.434904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.434915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.435066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.435076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-12-06 03:34:58.435241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.435252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.435458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.435469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.435636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.435646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.435735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.435745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.435892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.435904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-12-06 03:34:58.436094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.436107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.436254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.436265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.436400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.436411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.436511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.436522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.436698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.436709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-12-06 03:34:58.436878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.436889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.437111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.437123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.437322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.437334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.437485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.437496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.437655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.437666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-12-06 03:34:58.437828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.437838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.438001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.438012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.438236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.438247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.438354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.438365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.438451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.438462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-12-06 03:34:58.438620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.438631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.438723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.438733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.438879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.438889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.438990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.439002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.439154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.439165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-12-06 03:34:58.439317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.439349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.439636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.439668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.439797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.439828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-12-06 03:34:58.440029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-12-06 03:34:58.440063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.440293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.440325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-12-06 03:34:58.440552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.440563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.440703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.440714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.440846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.440857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.441095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.441107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.441195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.441205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-12-06 03:34:58.441382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.441394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.441551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.441562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.441743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.441754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.441834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.441845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.441996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.442006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-12-06 03:34:58.442142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.442153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.442297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.442308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.442529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.442539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.442700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.442711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.442940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.442957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-12-06 03:34:58.443134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.443144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.443346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.443356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.443445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.443456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.443631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.443642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.443866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.443876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-12-06 03:34:58.443974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.443985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.444114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.444124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.444277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.444288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.444507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.444517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.444743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.444753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-12-06 03:34:58.444899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.444909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.445076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.445087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.445162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.445172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.445251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.445262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-12-06 03:34:58.445427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-12-06 03:34:58.445438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-12-06 03:34:58.445670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.454 [2024-12-06 03:34:58.445680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:38.454 qpair failed and we were unable to recover it.
[... the same three-entry error group repeats a further 114 times between 03:34:58.445 and 03:34:58.466, alternating across tqpair=0x7f4cd8000b90, 0x7f4ce0000b90, and 0x7f4cd4000b90, always with errno = 111 against addr=10.0.0.2, port=4420 ...]
00:26:38.457 [2024-12-06 03:34:58.465939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.465982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.466167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.466197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.466397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.466407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.466514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.466524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.466766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.466796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-12-06 03:34:58.467014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.467045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.467245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.467276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.467473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.467483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.467623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.467653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.467843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.467874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-12-06 03:34:58.468098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.468131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.468308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.468318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.468470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.468504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.468641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.468672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.468809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.468839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-12-06 03:34:58.469082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.469114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.469241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.469278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.469542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.469552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.469786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.469817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.469977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.470023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-12-06 03:34:58.470255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.470274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.470452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.470466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.470565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.470579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.470729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.470743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.470922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.470964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-12-06 03:34:58.471167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.471197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.471442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.471473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.471697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.471711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.471892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.471906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.472097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.472111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-12-06 03:34:58.472233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.472248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.472400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-12-06 03:34:58.472414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-12-06 03:34:58.472540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.814817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.815128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.815146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.815294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.815306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 
00:26:38.722 [2024-12-06 03:34:58.815514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.815526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.815758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.815769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.815921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.815932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.816134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.816146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.816287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.816299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 
00:26:38.722 [2024-12-06 03:34:58.816399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.816412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.816565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.816577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.816673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.816686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.816766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.816778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.816954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.816967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 
00:26:38.722 [2024-12-06 03:34:58.817055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.817066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.817133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.817145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.817235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.817247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.817402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.817415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.817568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.817580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 
00:26:38.722 [2024-12-06 03:34:58.817754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.817766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.817847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.817860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.722 [2024-12-06 03:34:58.817944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.722 [2024-12-06 03:34:58.817960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.722 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.818048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.818060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.818200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.818212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 
00:26:38.723 [2024-12-06 03:34:58.818371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.818383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.818478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.818494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.818643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.818655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.818751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.818764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.818847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.818859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 
00:26:38.723 [2024-12-06 03:34:58.818944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.818961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.819057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.819069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.819230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.819242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.819323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.819335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.819418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.819430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 
00:26:38.723 [2024-12-06 03:34:58.819527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.819539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.819622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.819634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.819771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.819783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.819926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.819938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.820088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.820100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 
00:26:38.723 [2024-12-06 03:34:58.820175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.820187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.820260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.820272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.820331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.820343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.820423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.820435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 00:26:38.723 [2024-12-06 03:34:58.820531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.820543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 
00:26:38.723 [2024-12-06 03:34:58.820692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.723 [2024-12-06 03:34:58.820705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.723 qpair failed and we were unable to recover it. 
00:26:38.725 (previous message group repeated 114 more times, 03:34:58.820772 through 03:34:58.835387, all with errno = 111 for tqpair=0x7f4cd8000b90, addr=10.0.0.2, port=4420)
00:26:38.726 [2024-12-06 03:34:58.835613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.835625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 00:26:38.726 [2024-12-06 03:34:58.835825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.835837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 00:26:38.726 [2024-12-06 03:34:58.835935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.835952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 00:26:38.726 [2024-12-06 03:34:58.836022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.836034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 00:26:38.726 [2024-12-06 03:34:58.836113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.836125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 
00:26:38.726 [2024-12-06 03:34:58.836270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.836282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 00:26:38.726 [2024-12-06 03:34:58.836522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.836535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 00:26:38.726 [2024-12-06 03:34:58.836629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.836641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 00:26:38.726 [2024-12-06 03:34:58.836754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.836767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 00:26:38.726 [2024-12-06 03:34:58.836847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.836859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 
00:26:38.726 [2024-12-06 03:34:58.836956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.726 [2024-12-06 03:34:58.836968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.726 qpair failed and we were unable to recover it. 00:26:38.726 [2024-12-06 03:34:58.837037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.837049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.837120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.837133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.837278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.837291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.837375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.837387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 
00:26:38.727 [2024-12-06 03:34:58.837521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.837532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.837611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.837623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.837694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.837706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.837791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.837802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.837878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.837891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 
00:26:38.727 [2024-12-06 03:34:58.838059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.838072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.838158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.838170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.838305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.838317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.838521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.838532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.838673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.838685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 
00:26:38.727 [2024-12-06 03:34:58.838752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.838764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.838920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.838934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.839019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.839032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.839111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.839123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.839260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.839272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 
00:26:38.727 [2024-12-06 03:34:58.839548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.839561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.839637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.839649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.839796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.839809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.839956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.839969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.840113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.840125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 
00:26:38.727 [2024-12-06 03:34:58.840267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.840278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.840407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.840419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.840618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.840630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.840780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.840792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.840936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.840952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 
00:26:38.727 [2024-12-06 03:34:58.841109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.841121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.841202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.841214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.841352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.841364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.841453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.841466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.841611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.841623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 
00:26:38.727 [2024-12-06 03:34:58.841695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.841708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.727 qpair failed and we were unable to recover it. 00:26:38.727 [2024-12-06 03:34:58.841790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.727 [2024-12-06 03:34:58.841803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.841945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.841969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.842106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.842118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.842320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.842333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 
00:26:38.728 [2024-12-06 03:34:58.842474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.842486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.842554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.842566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.842778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.842790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.842877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.842889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.842975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.842987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 
00:26:38.728 [2024-12-06 03:34:58.843089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.843102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.843252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.843264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.843393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.843405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.843556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.843570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.843821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.843832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 
00:26:38.728 [2024-12-06 03:34:58.843971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.843983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.844118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.844130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.844228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.844240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.844332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.844345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.844416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.844428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 
00:26:38.728 [2024-12-06 03:34:58.844626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.844639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.844769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.844783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.845036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.845048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.845206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.845218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.845362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.845373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 
00:26:38.728 [2024-12-06 03:34:58.845472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.845483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.845640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.845652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.845871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.845883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.845967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.845979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.846222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.846234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 
00:26:38.728 [2024-12-06 03:34:58.846318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.846330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.846532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.846544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.846679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.846692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.846781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.846793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.847013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.847026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 
00:26:38.728 [2024-12-06 03:34:58.847183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.847195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.847343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.847355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.847556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.847567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.847696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.847708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.728 qpair failed and we were unable to recover it. 00:26:38.728 [2024-12-06 03:34:58.847933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.728 [2024-12-06 03:34:58.847945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 
00:26:38.729 [2024-12-06 03:34:58.848145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.848158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.848359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.848372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.848579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.848592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.848813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.848826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.848966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.848980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 
00:26:38.729 [2024-12-06 03:34:58.849134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.849146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.849273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.849285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.849481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.849494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.849749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.849762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.849894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.849906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 
00:26:38.729 [2024-12-06 03:34:58.850058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.850070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.850246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.850258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.850489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.850501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.850679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.850692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.850835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.850847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 
00:26:38.729 [2024-12-06 03:34:58.850991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.851004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.851149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.851163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.851368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.851389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.851575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.851591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.851733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.851746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 
00:26:38.729 [2024-12-06 03:34:58.851945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.851963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.852108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.852125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.852273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.852285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.852430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.852444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.852667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.852680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 
00:26:38.729 [2024-12-06 03:34:58.852764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.852777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.853032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.853045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.853196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.853208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.853300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.853312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.853519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.853532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 
00:26:38.729 [2024-12-06 03:34:58.853752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.853769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.853979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.853992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.854146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.854159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.854333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.854346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.854440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.854451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 
00:26:38.729 [2024-12-06 03:34:58.854596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.854609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.854685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.854696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.729 [2024-12-06 03:34:58.854848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.729 [2024-12-06 03:34:58.854859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.729 qpair failed and we were unable to recover it. 00:26:38.730 [2024-12-06 03:34:58.855008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.730 [2024-12-06 03:34:58.855021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.730 qpair failed and we were unable to recover it. 00:26:38.730 [2024-12-06 03:34:58.855240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.730 [2024-12-06 03:34:58.855252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.730 qpair failed and we were unable to recover it. 
00:26:38.730 [2024-12-06 03:34:58.855483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.730 [2024-12-06 03:34:58.855496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.730 qpair failed and we were unable to recover it. 00:26:38.730 [2024-12-06 03:34:58.855651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.730 [2024-12-06 03:34:58.855667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:38.730 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.855894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.855906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.856112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.856125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.856336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.856348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 
00:26:39.010 [2024-12-06 03:34:58.856567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.856579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.856655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.856666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.856759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.856771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.856969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.857008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.857193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.857230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 
00:26:39.010 [2024-12-06 03:34:58.857389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.857407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.857635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.857651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.857896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.857912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.858018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.858036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.858288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.858304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 
00:26:39.010 [2024-12-06 03:34:58.858446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.858461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.858645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.858662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.858803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.858818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.858927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.858942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.859097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.859114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 
00:26:39.010 [2024-12-06 03:34:58.859250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.859266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.859443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.859459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.859631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.859648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.859809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.859825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 00:26:39.010 [2024-12-06 03:34:58.860086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.010 [2024-12-06 03:34:58.860103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.010 qpair failed and we were unable to recover it. 
00:26:39.011 [2024-12-06 03:34:58.860263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.860281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.860514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.860530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.860711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.860727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.860884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.860901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.861090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.861105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 
00:26:39.011 [2024-12-06 03:34:58.861280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.861292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.861513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.861526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.861741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.861753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.861971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.861985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.862061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.862073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 
00:26:39.011 [2024-12-06 03:34:58.862306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.862324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.862472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.862488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.862647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.862663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.862923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.862939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.863088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.863105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 
00:26:39.011 [2024-12-06 03:34:58.863263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.863279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.863495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.863511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.863706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.863723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.863903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.863919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 00:26:39.011 [2024-12-06 03:34:58.864167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.011 [2024-12-06 03:34:58.864184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.011 qpair failed and we were unable to recover it. 
00:26:39.011 [2024-12-06 03:34:58.864413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.864430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.864530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.864546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.864705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.864721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.864870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.864886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.865049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.865066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.865211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.865226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.865382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.865399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.865558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.865575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.865782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.865798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.866038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.866055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.866311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.866328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.866490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.866506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.866764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.866780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.866989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.867006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.867215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.867231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.867395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.867412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.867612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.867627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.011 [2024-12-06 03:34:58.867844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.011 [2024-12-06 03:34:58.867863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.011 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.868024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.868041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.868191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.868207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.868365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.868382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.868584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.868601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.868777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.868793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.868945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.868968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.869202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.869218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.869452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.869468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.869702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.869718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.869826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.869842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.869987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.870004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.870151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.870167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.870324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.870340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.870525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.870541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.870688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.870704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.870788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.870804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.870942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.870964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.871194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.871210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.871317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.871333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.871572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.871589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.871772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.871788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.871939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.871966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.872134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.872150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.872312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.872328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.872563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.872578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.872719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.872736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.872874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.872893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.873037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.873054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.873208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.873224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.873431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.873448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.873644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.873660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.873889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.873905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.874066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.874083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.874316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.874332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.874482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.874498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.874678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.874694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.874850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.874866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.875044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.012 [2024-12-06 03:34:58.875061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.012 qpair failed and we were unable to recover it.
00:26:39.012 [2024-12-06 03:34:58.875221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.875237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.875397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.875412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.875571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.875588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.875798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.875814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.876023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.876039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.876253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.876269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.876498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.876514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.876669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.876685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.876855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.876871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.877113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.877130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.877300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.877317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.877526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.877542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.877749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.877765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.877869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.877885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.878122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.878138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.878303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.878322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.878626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.878642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.878916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.878932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.879031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.879050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.879216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.879229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.879311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.879322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.879470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.879483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.879651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.879664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.879830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.879843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.879990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.880003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.880157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.880170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.880311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.880323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.880488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.880500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.880590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.880602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.880759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.880771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.880937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.880956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.881052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.881063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.881258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.881270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.881494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.881506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.881701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.881714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.881936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.881952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.882181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.882194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.882409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.013 [2024-12-06 03:34:58.882422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.013 qpair failed and we were unable to recover it.
00:26:39.013 [2024-12-06 03:34:58.882595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.882606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.882766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.882779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.882941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.882957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.883043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.883055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.883282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.883297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.883378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.883390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.883523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.883535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.883736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.883748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.883879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.883892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.884047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.884060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.884215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.884228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.884362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.884375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.884623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.884635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.884729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.884741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.884836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.884849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.884993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.885006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.885229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.885242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.885490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.885503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.885657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.885669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.885826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.885839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.885969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.014 [2024-12-06 03:34:58.885981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.014 qpair failed and we were unable to recover it.
00:26:39.014 [2024-12-06 03:34:58.886227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.886239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.886386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.886398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.886542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.886555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.886688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.886701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.886795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.886806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 
00:26:39.014 [2024-12-06 03:34:58.887036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.887049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.887248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.887261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.887468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.887480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.887623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.887636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.887779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.887791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 
00:26:39.014 [2024-12-06 03:34:58.888007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.888020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.888270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.014 [2024-12-06 03:34:58.888283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.014 qpair failed and we were unable to recover it. 00:26:39.014 [2024-12-06 03:34:58.888421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.888433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.888635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.888648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.888792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.888804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 
00:26:39.015 [2024-12-06 03:34:58.888940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.888957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.889045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.889057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.889270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.889283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.889430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.889442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.889679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.889692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 
00:26:39.015 [2024-12-06 03:34:58.889919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.889931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.890065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.890079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.890278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.890291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.890548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.890562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.890776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.890789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 
00:26:39.015 [2024-12-06 03:34:58.890932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.890945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.891117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.891130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.891277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.891290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.891496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.891509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.891690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.891703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 
00:26:39.015 [2024-12-06 03:34:58.891878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.891890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.892057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.892070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.892210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.892223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.892368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.892380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.892459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.892470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 
00:26:39.015 [2024-12-06 03:34:58.892632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.892645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.892746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.892758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.892852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.892865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.893016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.893029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.893171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.893184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 
00:26:39.015 [2024-12-06 03:34:58.893281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.893293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.893491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.893503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.893637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.893649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.893733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.893745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.894022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.894036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 
00:26:39.015 [2024-12-06 03:34:58.894247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.894259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.894339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.894350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.894435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.894446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.015 qpair failed and we were unable to recover it. 00:26:39.015 [2024-12-06 03:34:58.894588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.015 [2024-12-06 03:34:58.894601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.894764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.894777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 
00:26:39.016 [2024-12-06 03:34:58.895041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.895054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.895217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.895229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.895428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.895440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.895662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.895674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.895836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.895849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 
00:26:39.016 [2024-12-06 03:34:58.896059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.896071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.896293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.896305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.896525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.896538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.896767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.896779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.897002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.897014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 
00:26:39.016 [2024-12-06 03:34:58.897181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.897193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.897386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.897398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.897598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.897610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.897703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.897718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.897928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.897940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 
00:26:39.016 [2024-12-06 03:34:58.898030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.898041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.898215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.898228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.898377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.898390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.898594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.898607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.898842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.898854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 
00:26:39.016 [2024-12-06 03:34:58.899053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.899066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.899223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.899236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.899477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.899489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.899647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.899660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 00:26:39.016 [2024-12-06 03:34:58.899885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.899898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 
00:26:39.016 [2024-12-06 03:34:58.900097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.016 [2024-12-06 03:34:58.900110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.016 qpair failed and we were unable to recover it. 
[... last message repeated for every reconnect attempt from 03:34:58.900194 through 03:34:58.922179: connect() failed, errno = 111; sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:26:39.019 [2024-12-06 03:34:58.922335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.019 [2024-12-06 03:34:58.922348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.019 qpair failed and we were unable to recover it. 00:26:39.019 [2024-12-06 03:34:58.922495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.019 [2024-12-06 03:34:58.922508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.019 qpair failed and we were unable to recover it. 00:26:39.019 [2024-12-06 03:34:58.922707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.019 [2024-12-06 03:34:58.922720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.019 qpair failed and we were unable to recover it. 00:26:39.019 [2024-12-06 03:34:58.922861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.019 [2024-12-06 03:34:58.922874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.019 qpair failed and we were unable to recover it. 00:26:39.019 [2024-12-06 03:34:58.923086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.019 [2024-12-06 03:34:58.923099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.019 qpair failed and we were unable to recover it. 
00:26:39.019 [2024-12-06 03:34:58.923194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.923205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.923348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.923361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.923596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.923609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.923832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.923844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.923990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.924002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 
00:26:39.020 [2024-12-06 03:34:58.924133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.924146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.924368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.924380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.924545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.924557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.924765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.924778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.924858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.924869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 
00:26:39.020 [2024-12-06 03:34:58.925104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.925118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.925340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.925352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.925495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.925508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.925647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.925659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.925814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.925826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 
00:26:39.020 [2024-12-06 03:34:58.925967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.925980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.926222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.926234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.926376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.926389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.926566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.926578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.926664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.926676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 
00:26:39.020 [2024-12-06 03:34:58.926750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.926761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.926984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.926997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.927155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.927167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.927316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.927329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.927551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.927563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 
00:26:39.020 [2024-12-06 03:34:58.927705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.927717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.927916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.927928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.928177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.928191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.928275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.928289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.928434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.928447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 
00:26:39.020 [2024-12-06 03:34:58.928674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.928687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.928781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.928793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.928927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.928940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.929074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.929086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.929236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.929248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 
00:26:39.020 [2024-12-06 03:34:58.929406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.929418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.929502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.929513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.929657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.020 [2024-12-06 03:34:58.929670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.020 qpair failed and we were unable to recover it. 00:26:39.020 [2024-12-06 03:34:58.929870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.929883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.930018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.930031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 
00:26:39.021 [2024-12-06 03:34:58.930192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.930205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.930361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.930373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.930518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.930530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.930681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.930693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.930927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.930940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 
00:26:39.021 [2024-12-06 03:34:58.931173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.931186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.931436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.931448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.931597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.931609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.931847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.931860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.932033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.932046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 
00:26:39.021 [2024-12-06 03:34:58.932182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.932194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.932276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.932288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.932544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.932556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.932703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.932716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 00:26:39.021 [2024-12-06 03:34:58.932933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.021 [2024-12-06 03:34:58.932946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.021 qpair failed and we were unable to recover it. 
00:26:39.021 [2024-12-06 03:34:58.933187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.021 [2024-12-06 03:34:58.933207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.021 qpair failed and we were unable to recover it.
00:26:39.022 [2024-12-06 03:34:58.938511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.938527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.938684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.938701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.938849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.938864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.939021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.939034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.939183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.939195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 
00:26:39.022 [2024-12-06 03:34:58.939271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.939282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.939503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.939515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.939675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.939687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.939883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.939895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.940044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.940057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 
00:26:39.022 [2024-12-06 03:34:58.940134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.940145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.940376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.940388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.940517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.940529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.940678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.940690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.940833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.940846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 
00:26:39.022 [2024-12-06 03:34:58.941075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.941091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.941304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.941317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.941395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.941406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.941555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.941567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.941711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.941723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 
00:26:39.022 [2024-12-06 03:34:58.941932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.941944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.942104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.942117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.942370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.942382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.942479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.942492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.942647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.942659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 
00:26:39.022 [2024-12-06 03:34:58.942831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.942843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.943001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.943014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.943160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.943172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.943398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.943410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.943546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.943559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 
00:26:39.022 [2024-12-06 03:34:58.943756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.943768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.943975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.943987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.944131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.944143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.944277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.022 [2024-12-06 03:34:58.944290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.022 qpair failed and we were unable to recover it. 00:26:39.022 [2024-12-06 03:34:58.944524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.944536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 
00:26:39.023 [2024-12-06 03:34:58.944800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.944812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.944911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.944924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.945098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.945111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.945338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.945350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.945550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.945561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 
00:26:39.023 [2024-12-06 03:34:58.945747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.945760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.945990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.946003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.946175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.946190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.946357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.946369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.946449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.946461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 
00:26:39.023 [2024-12-06 03:34:58.946658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.946670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.946903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.946915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.947088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.947101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.947321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.947333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.947536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.947549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 
00:26:39.023 [2024-12-06 03:34:58.947700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.947713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.947858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.947871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.948019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.948032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.948201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.948213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.948355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.948368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 
00:26:39.023 [2024-12-06 03:34:58.948453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.948464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.948685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.948699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.948833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.948845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.949004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.949017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.949228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.949240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 
00:26:39.023 [2024-12-06 03:34:58.949396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.949408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.949623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.949636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.949817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.949829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.950049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.950062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.950222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.950235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 
00:26:39.023 [2024-12-06 03:34:58.950449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.950461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.950678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.950690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.950852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.950865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.951009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.951023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.951269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.951281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 
00:26:39.023 [2024-12-06 03:34:58.951507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.951520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.951750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.023 [2024-12-06 03:34:58.951763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.023 qpair failed and we were unable to recover it. 00:26:39.023 [2024-12-06 03:34:58.951930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.951943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.952184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.952196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.952367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.952379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 
00:26:39.024 [2024-12-06 03:34:58.952584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.952597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.952692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.952703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.952929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.952941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.953116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.953129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.953307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.953319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 
00:26:39.024 [2024-12-06 03:34:58.953527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.953539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.953751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.953763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.953846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.953858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.953993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.954005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 00:26:39.024 [2024-12-06 03:34:58.954205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.954217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it. 
00:26:39.024 [2024-12-06 03:34:58.954452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.024 [2024-12-06 03:34:58.954464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.024 qpair failed and we were unable to recover it.
[... the identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" triplet repeats ~114 more times between 03:34:58.954618 and 03:34:58.975174, all for tqpair=0x7f4cd8000b90, addr=10.0.0.2, port=4420 ...]
00:26:39.027 [2024-12-06 03:34:58.975311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.975324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.975546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.975558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.975706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.975722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.975861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.975874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.976021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.976034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 
00:26:39.027 [2024-12-06 03:34:58.976185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.976198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.976417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.976429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.976578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.976590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.976815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.976828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.976978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.976991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 
00:26:39.027 [2024-12-06 03:34:58.977069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.977080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.977173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.977184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.977330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.977343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.977498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.977510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.977736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.977748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 
00:26:39.027 [2024-12-06 03:34:58.977894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.977906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.978151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.978164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.978384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.978397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.978493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.978506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 00:26:39.027 [2024-12-06 03:34:58.978665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.027 [2024-12-06 03:34:58.978677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.027 qpair failed and we were unable to recover it. 
00:26:39.027 [2024-12-06 03:34:58.978848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.978860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.978958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.978970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.979114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.979127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.979277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.979289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.979420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.979432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 
00:26:39.028 [2024-12-06 03:34:58.979660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.979672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.979762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.979773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.979977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.979990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.980169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.980181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.980326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.980338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 
00:26:39.028 [2024-12-06 03:34:58.980511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.980523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.980738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.980751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.980899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.980912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.981166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.981179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.981327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.981339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 
00:26:39.028 [2024-12-06 03:34:58.981538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.981551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.981739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.981751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.981978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.981992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.982126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.982145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.982342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.982354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 
00:26:39.028 [2024-12-06 03:34:58.982497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.982509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.982757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.982769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.982920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.982935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.983110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.983123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.983208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.983220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 
00:26:39.028 [2024-12-06 03:34:58.983367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.983379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.983484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.983496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.983644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.983656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.983801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.983814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.984040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.984053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 
00:26:39.028 [2024-12-06 03:34:58.984275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.984287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.984459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.984472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.984669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.984682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.028 [2024-12-06 03:34:58.984840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.028 [2024-12-06 03:34:58.984852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.028 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.984983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.984997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 
00:26:39.029 [2024-12-06 03:34:58.985214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.985227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.985455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.985468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.985556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.985568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.985824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.985836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.986052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.986065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 
00:26:39.029 [2024-12-06 03:34:58.986292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.986304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.986504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.986516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.986760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.986772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.986977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.986989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.987215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.987227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 
00:26:39.029 [2024-12-06 03:34:58.987451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.987464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.987614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.987627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.987786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.987799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.987932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.987944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.988183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.988195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 
00:26:39.029 [2024-12-06 03:34:58.988290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.988303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.988538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.988550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.988728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.988740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.988836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.988848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 00:26:39.029 [2024-12-06 03:34:58.988991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.029 [2024-12-06 03:34:58.989004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.029 qpair failed and we were unable to recover it. 
00:26:39.029 [2024-12-06 03:34:58.989140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.029 [2024-12-06 03:34:58.989153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.029 qpair failed and we were unable to recover it.
(last 3 messages repeated 113 times for tqpair=0x7f4cd8000b90, timestamps 03:34:58.989 through 03:34:59.009)
00:26:39.032 [2024-12-06 03:34:59.009446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.032 [2024-12-06 03:34:59.009458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.032 qpair failed and we were unable to recover it.
00:26:39.032 [2024-12-06 03:34:59.009689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.009701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.009922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.009934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.010092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.010106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.010338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.010350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.010588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.010601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 
00:26:39.032 [2024-12-06 03:34:59.010753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.010766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.010972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.010985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.011187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.011200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.011396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.011409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.011660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.011672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 
00:26:39.032 [2024-12-06 03:34:59.011939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.011963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.012191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.012204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.012430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.012445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.012604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.012617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.032 qpair failed and we were unable to recover it. 00:26:39.032 [2024-12-06 03:34:59.012695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.032 [2024-12-06 03:34:59.012707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 
00:26:39.033 [2024-12-06 03:34:59.012877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.012889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.013059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.013072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.013286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.013298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.013429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.013441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.013659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.013671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 
00:26:39.033 [2024-12-06 03:34:59.013923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.013935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.014195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.014208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.014343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.014355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.014420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.014431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.014581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.014594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 
00:26:39.033 [2024-12-06 03:34:59.014744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.014756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.014898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.014910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.015072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.015084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.015314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.015326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.015496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.015508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 
00:26:39.033 [2024-12-06 03:34:59.015600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.015611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.015789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.015802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.015973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.015985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.016052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.016063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.016306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.016318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 
00:26:39.033 [2024-12-06 03:34:59.016452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.016464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.016629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.016642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.016807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.016820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.017016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.017029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.017186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.017198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 
00:26:39.033 [2024-12-06 03:34:59.017424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.017437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.017674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.017686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.017897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.017909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.018086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.018098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.018235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.018248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 
00:26:39.033 [2024-12-06 03:34:59.018475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.018491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.018622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.033 [2024-12-06 03:34:59.018634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.033 qpair failed and we were unable to recover it. 00:26:39.033 [2024-12-06 03:34:59.018862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.018874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.018973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.018985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.019125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.019137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 
00:26:39.034 [2024-12-06 03:34:59.019287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.019299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.019500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.019513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.019602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.019615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.019885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.019897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.020099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.020112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 
00:26:39.034 [2024-12-06 03:34:59.020337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.020349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.020566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.020579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.020726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.020738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.020871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.020883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.020966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.020978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 
00:26:39.034 [2024-12-06 03:34:59.021176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.021188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.021320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.021333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.021557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.021569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.021793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.021805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.022060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.022073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 
00:26:39.034 [2024-12-06 03:34:59.022301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.022313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.022448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.022461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.022677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.022689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.022836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.022848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.023092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.023105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 
00:26:39.034 [2024-12-06 03:34:59.023204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.023215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.023456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.023469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.023692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.023705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.023961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.023974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.024121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.024133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 
00:26:39.034 [2024-12-06 03:34:59.024355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.024367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.024577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.024589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.024823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.024835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.024980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.024993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 00:26:39.034 [2024-12-06 03:34:59.025214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.034 [2024-12-06 03:34:59.025226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.034 qpair failed and we were unable to recover it. 
00:26:39.035 [2024-12-06 03:34:59.031382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.035 [2024-12-06 03:34:59.031395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.035 qpair failed and we were unable to recover it. 00:26:39.035 [2024-12-06 03:34:59.031565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.035 [2024-12-06 03:34:59.031577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.035 qpair failed and we were unable to recover it. 00:26:39.035 [2024-12-06 03:34:59.031716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.035 [2024-12-06 03:34:59.031728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.035 qpair failed and we were unable to recover it. 00:26:39.035 [2024-12-06 03:34:59.031814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.035 [2024-12-06 03:34:59.031825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.035 qpair failed and we were unable to recover it. 00:26:39.035 [2024-12-06 03:34:59.032084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.035 [2024-12-06 03:34:59.032112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.035 qpair failed and we were unable to recover it. 
00:26:39.037 [2024-12-06 03:34:59.046956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.037 [2024-12-06 03:34:59.046972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.037 qpair failed and we were unable to recover it. 00:26:39.037 [2024-12-06 03:34:59.047235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.037 [2024-12-06 03:34:59.047251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.037 qpair failed and we were unable to recover it. 00:26:39.037 [2024-12-06 03:34:59.047430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.047447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.047599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.047615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.047757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.047773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 
00:26:39.038 [2024-12-06 03:34:59.048006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.048024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.048256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.048272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.048373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.048388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.048528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.048544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.048773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.048789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 
00:26:39.038 [2024-12-06 03:34:59.048999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.049016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.049230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.049247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.049396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.049413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.049631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.049647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.049898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.049915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 
00:26:39.038 [2024-12-06 03:34:59.050066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.050083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.050289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.050305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.050528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.050545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.050751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.050768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.051041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.051058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 
00:26:39.038 [2024-12-06 03:34:59.051291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.051308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.051538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.051555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.051707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.051725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.051883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.051899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.052104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.052120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 
00:26:39.038 [2024-12-06 03:34:59.052198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.052210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.052419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.052431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.052567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.052579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.052714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.052726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.052884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.052897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 
00:26:39.038 [2024-12-06 03:34:59.052987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.052999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.053172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.053184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.053348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.053360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.053514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.053527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.053661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.053673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 
00:26:39.038 [2024-12-06 03:34:59.053917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.038 [2024-12-06 03:34:59.053930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.038 qpair failed and we were unable to recover it. 00:26:39.038 [2024-12-06 03:34:59.054124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.054137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.054291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.054306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.054530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.054542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.054710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.054723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 
00:26:39.039 [2024-12-06 03:34:59.054928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.054940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.055157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.055170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.055398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.055411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.055639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.055651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.055894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.055907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 
00:26:39.039 [2024-12-06 03:34:59.056135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.056148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.056394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.056407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.056634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.056646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.056847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.056860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.057058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.057072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 
00:26:39.039 [2024-12-06 03:34:59.057282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.057295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.057514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.057526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.057692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.057705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.057940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.057956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.058176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.058188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 
00:26:39.039 [2024-12-06 03:34:59.058408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.058421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.058645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.058658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.058830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.058842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.058986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.058999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.059160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.059173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 
00:26:39.039 [2024-12-06 03:34:59.059321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.059333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.059530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.059542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.059761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.059774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.059974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.059988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.060220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.060240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 
00:26:39.039 [2024-12-06 03:34:59.060339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.060354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.060600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.060616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.060727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.060744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.060841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.060856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.061048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.061065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 
00:26:39.039 [2024-12-06 03:34:59.061240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.061256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.061425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.061442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.061675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.061691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.061915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.039 [2024-12-06 03:34:59.061932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.039 qpair failed and we were unable to recover it. 00:26:39.039 [2024-12-06 03:34:59.062110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.062145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 
00:26:39.040 [2024-12-06 03:34:59.062358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.062376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.062637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.062653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.062795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.062811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.062972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.062989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.063083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.063098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 
00:26:39.040 [2024-12-06 03:34:59.063253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.063268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.063423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.063438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.063582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.063598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.063825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.063841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.064107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.064124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 
00:26:39.040 [2024-12-06 03:34:59.064290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.064305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.064600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.064616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.064769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.064785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.064954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.064971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.065151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.065166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 
00:26:39.040 [2024-12-06 03:34:59.065323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.065339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.065443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.065457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.065631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.065648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.065834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.065850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.066059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.066075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 
00:26:39.040 [2024-12-06 03:34:59.066287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.066304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.066458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.066475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.066621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.066637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.066828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.066844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.066984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.067001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 
00:26:39.040 [2024-12-06 03:34:59.067156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.067174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.067335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.067351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.067451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.067466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.067627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.067642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.067722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.067740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 
00:26:39.040 [2024-12-06 03:34:59.067953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.067970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.068112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.068128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.068386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.068403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.068604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.068621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.068785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.068801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 
00:26:39.040 [2024-12-06 03:34:59.068902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.068917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.069082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.040 [2024-12-06 03:34:59.069099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.040 qpair failed and we were unable to recover it. 00:26:39.040 [2024-12-06 03:34:59.069251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.069267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.069450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.069467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.069676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.069692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 
00:26:39.041 [2024-12-06 03:34:59.069838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.069854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.070082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.070099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.070253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.070269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.070523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.070540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.070737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.070753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 
00:26:39.041 [2024-12-06 03:34:59.070900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.070915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.071010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.071026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.071171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.071186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.071405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.071420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.071579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.071594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 
00:26:39.041 [2024-12-06 03:34:59.071747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.071763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.071973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.071990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.072264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.072280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.072434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.072449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.072708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.072724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 
00:26:39.041 [2024-12-06 03:34:59.072876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.072892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.073056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.073073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.073306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.073322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.073475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.073491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.073651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.073667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 
00:26:39.041 [2024-12-06 03:34:59.073814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.073830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.074008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.074024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.074118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.074135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.074374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.074390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.074565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.074581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 
00:26:39.041 [2024-12-06 03:34:59.074809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.074825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.074920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.074935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.075103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.075119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.075311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.075325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.075475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.075494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 
00:26:39.041 [2024-12-06 03:34:59.075596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.075611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.075846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.075861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.076034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.076051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.076158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.076174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 00:26:39.041 [2024-12-06 03:34:59.076346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.041 [2024-12-06 03:34:59.076362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.041 qpair failed and we were unable to recover it. 
00:26:39.042 [2024-12-06 03:34:59.076616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.076632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.076869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.076885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.077097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.077114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.077268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.077285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.077502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.077519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 
00:26:39.042 [2024-12-06 03:34:59.077682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.077698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.077867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.077883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.078024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.078040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.078184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.078200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.078384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.078400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 
00:26:39.042 [2024-12-06 03:34:59.078552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.078568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.078746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.078762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.078971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.078988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.079197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.079213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.079396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.079412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 
00:26:39.042 [2024-12-06 03:34:59.079566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.079582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.079837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.079853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.080100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.080116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.080350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.080366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.080582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.080598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 
00:26:39.042 [2024-12-06 03:34:59.080738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.080753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.080914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.080931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.081208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.081224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.081367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.081383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.081545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.081561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 
00:26:39.042 [2024-12-06 03:34:59.081704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.081719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.081965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.081982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.082244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.082260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.082420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.082436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 00:26:39.042 [2024-12-06 03:34:59.082652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.042 [2024-12-06 03:34:59.082668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.042 qpair failed and we were unable to recover it. 
00:26:39.042 [2024-12-06 03:34:59.082823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.042 [2024-12-06 03:34:59.082838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.042 qpair failed and we were unable to recover it.
00:26:39.042 [2024-12-06 03:34:59.083010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.042 [2024-12-06 03:34:59.083026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.042 qpair failed and we were unable to recover it.
00:26:39.042 [2024-12-06 03:34:59.083197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.042 [2024-12-06 03:34:59.083213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.042 qpair failed and we were unable to recover it.
00:26:39.042 [2024-12-06 03:34:59.083407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.083422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.083659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.083678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.083916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.083933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.084100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.084118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.084340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.084357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.084565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.084582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.084811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.084827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.084971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.084988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.085149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.085166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.085308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.085325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.085418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.085433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.085599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.085615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.085852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.085868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.085962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.085978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.086231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.086247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.086487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.086504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.086721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.086737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.086972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.086989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.087145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.087162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.087405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.087422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.087659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.087676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.087834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.087851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.088059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.088076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.088317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.088334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.088481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.088497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.088598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.088613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.088833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.088850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.089004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.089021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.089048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116ab20 (9): Bad file descriptor
00:26:39.043 [2024-12-06 03:34:59.089233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.089250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.089405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.089418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.089557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.089570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.089715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.089729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.089869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.089883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.090100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.090113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.090312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.090324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.090467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.090479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.090627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.090641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.043 qpair failed and we were unable to recover it.
00:26:39.043 [2024-12-06 03:34:59.090857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.043 [2024-12-06 03:34:59.090870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.091011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.091024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.091241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.091253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.091386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.091398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.091558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.091571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.091785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.091798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.091930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.091943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.092051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.092063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.092284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.092296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.092455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.092467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.092606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.092618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.092794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.092808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.093006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.093019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.093247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.093260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.093405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.093419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.093636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.093649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.093874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.093886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.094137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.094153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.094238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.094249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.094403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.094416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.094574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.094586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.094807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.094820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.094885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.094896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.095031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.095044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.095263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.095276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.095499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.095511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.095660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.095672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.095840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.095852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.095929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.095940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.096116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.096129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.096277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.096289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.096519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.096532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.096705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.096717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.096863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.096876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.097020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.097033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.097126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.097138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.097232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.097244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.097463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.044 [2024-12-06 03:34:59.097476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.044 qpair failed and we were unable to recover it.
00:26:39.044 [2024-12-06 03:34:59.097613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.097625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.097714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.097726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.097887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.097899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.098159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.098173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.098340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.098353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.098578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.098591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.098769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.098781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.098924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.098936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.099031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.099043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.099241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.099253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.099394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.099406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.099548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.099561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.099695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.099707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.099913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.099926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.100110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.100123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.100205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.100217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.100445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.045 [2024-12-06 03:34:59.100457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.045 qpair failed and we were unable to recover it.
00:26:39.045 [2024-12-06 03:34:59.100657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.100669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.100891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.100903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.101074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.101089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.101222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.101235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.101461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.101474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 
00:26:39.045 [2024-12-06 03:34:59.101571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.101585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.101724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.101738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.101937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.101959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.102104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.102118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.102268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.102281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 
00:26:39.045 [2024-12-06 03:34:59.102421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.102433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.102703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.102716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.102938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.102954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.103088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.103101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.103322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.103335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 
00:26:39.045 [2024-12-06 03:34:59.103511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.103524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.103744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.103756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.103917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.103929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.104159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.104172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.104253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.104266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 
00:26:39.045 [2024-12-06 03:34:59.104486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.045 [2024-12-06 03:34:59.104499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.045 qpair failed and we were unable to recover it. 00:26:39.045 [2024-12-06 03:34:59.104631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.104644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.104794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.104807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.105060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.105072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.105269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.105282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 
00:26:39.046 [2024-12-06 03:34:59.105551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.105563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.105637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.105649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.105849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.105862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.106030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.106050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.106192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.106204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 
00:26:39.046 [2024-12-06 03:34:59.106295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.106307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.106469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.106482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.106702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.106715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.106940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.106957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.107180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.107194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 
00:26:39.046 [2024-12-06 03:34:59.107272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.107283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.107421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.107433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.107521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.107533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.107753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.107766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.107862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.107876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 
00:26:39.046 [2024-12-06 03:34:59.108076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.108090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.108252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.108265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.108440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.108456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.108702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.108714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.108806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.108817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 
00:26:39.046 [2024-12-06 03:34:59.108902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.108914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.109063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.109078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.109278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.109292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.109374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.109385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.109528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.109540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 
00:26:39.046 [2024-12-06 03:34:59.109690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.109703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.109890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.109903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.110070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.110082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.110308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.110321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.110521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.110533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 
00:26:39.046 [2024-12-06 03:34:59.110676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.110688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.110827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.110840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.110992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.111005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.046 [2024-12-06 03:34:59.111203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.046 [2024-12-06 03:34:59.111215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.046 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.111367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.111379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 
00:26:39.047 [2024-12-06 03:34:59.111551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.111564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.111761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.111773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.111915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.111928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.112075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.112089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.112229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.112242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 
00:26:39.047 [2024-12-06 03:34:59.112440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.112452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.112672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.112685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.112905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.112919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.113089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.113103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.113248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.113261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 
00:26:39.047 [2024-12-06 03:34:59.113437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.113450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.113617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.113630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.113693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.113705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.113883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.113896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.113971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.113983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 
00:26:39.047 [2024-12-06 03:34:59.114067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.114079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.114257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.114270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.114419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.114432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.114591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.114604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.114691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.114703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 
00:26:39.047 [2024-12-06 03:34:59.114802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.114814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.115052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.115065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.115160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.115174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.115397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.115409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.115609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.115622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 
00:26:39.047 [2024-12-06 03:34:59.115708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.115720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.115854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.115867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.116028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.116041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.116266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.116280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.116384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.116396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 
00:26:39.047 [2024-12-06 03:34:59.116537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.116550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.116700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.047 [2024-12-06 03:34:59.116714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.047 qpair failed and we were unable to recover it. 00:26:39.047 [2024-12-06 03:34:59.116959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.116973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.117189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.117202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.117343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.117356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 
00:26:39.048 [2024-12-06 03:34:59.117490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.117503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.117701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.117713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.117959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.117972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.118104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.118117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.118332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.118344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 
00:26:39.048 [2024-12-06 03:34:59.118527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.118540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.118687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.118700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.118844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.118858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.119056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.119069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.119217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.119230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 
00:26:39.048 [2024-12-06 03:34:59.119464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.119477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.119630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.119643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.119774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.119787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.119944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.119960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.120218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.120232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 
00:26:39.048 [2024-12-06 03:34:59.120375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.120388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.120528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.120540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.120673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.120686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.120832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.120846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.121005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.121018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 
00:26:39.048 [2024-12-06 03:34:59.121215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.121227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.121322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.121333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.121422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.121433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.121576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.121590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.121789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.121802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 
00:26:39.048 [2024-12-06 03:34:59.121951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.121964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.122100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.122114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.122330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.122345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.122529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.122541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.122691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.122703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 
00:26:39.048 [2024-12-06 03:34:59.122924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.122937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.123030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.123043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.123238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.123250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.123387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.123401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 00:26:39.048 [2024-12-06 03:34:59.123621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.048 [2024-12-06 03:34:59.123633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.048 qpair failed and we were unable to recover it. 
00:26:39.048 [2024-12-06 03:34:59.123867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.123880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.124030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.124043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.124211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.124223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.124447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.124459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.124610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.124622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 
00:26:39.049 [2024-12-06 03:34:59.124815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.124828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.124923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.124935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.125157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.125178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.125335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.125351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.125445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.125459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 
00:26:39.049 [2024-12-06 03:34:59.125687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.125703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.125931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.125953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.126192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.126208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.126382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.126398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.126588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.126604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 
00:26:39.049 [2024-12-06 03:34:59.126765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.126781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.126921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.126936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.127101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.127113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.127262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.049 [2024-12-06 03:34:59.127275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.049 qpair failed and we were unable to recover it. 00:26:39.049 [2024-12-06 03:34:59.127525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.349 [2024-12-06 03:34:59.127542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.349 qpair failed and we were unable to recover it. 
00:26:39.349 [2024-12-06 03:34:59.127769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.349 [2024-12-06 03:34:59.127785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.349 qpair failed and we were unable to recover it. 00:26:39.349 [2024-12-06 03:34:59.127939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.349 [2024-12-06 03:34:59.127960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.349 qpair failed and we were unable to recover it. 00:26:39.349 [2024-12-06 03:34:59.128190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.349 [2024-12-06 03:34:59.128206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.349 qpair failed and we were unable to recover it. 00:26:39.349 [2024-12-06 03:34:59.128318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.349 [2024-12-06 03:34:59.128334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.349 qpair failed and we were unable to recover it. 00:26:39.349 [2024-12-06 03:34:59.128565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.349 [2024-12-06 03:34:59.128582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.349 qpair failed and we were unable to recover it. 
00:26:39.349 [2024-12-06 03:34:59.128797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.349 [2024-12-06 03:34:59.128814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.349 qpair failed and we were unable to recover it. 00:26:39.349 [2024-12-06 03:34:59.128977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.349 [2024-12-06 03:34:59.128994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.349 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.129147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.129164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.129319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.129336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.129567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.129583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 
00:26:39.350 [2024-12-06 03:34:59.129687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.129705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.129862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.129879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.130056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.130075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.130286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.130303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.130447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.130464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 
00:26:39.350 [2024-12-06 03:34:59.130640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.130656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.130914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.130931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.131099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.131113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.131259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.131271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.131425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.131438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 
00:26:39.350 [2024-12-06 03:34:59.131531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.131542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.131764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.131777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.131943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.131961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.132211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.132224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.132382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.132394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 
00:26:39.350 [2024-12-06 03:34:59.132614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.132626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.132703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.132715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.132920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.132933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.133136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.133149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 00:26:39.350 [2024-12-06 03:34:59.133308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.350 [2024-12-06 03:34:59.133320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.350 qpair failed and we were unable to recover it. 
00:26:39.350 [2024-12-06 03:34:59.133404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.350 [2024-12-06 03:34:59.133415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.350 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triple above repeats continuously from 03:34:59.133 through 03:34:59.155 for tqpair=0x7f4cd8000b90, 0x7f4cd4000b90, and 0x115cbe0; every repeat reports errno = 111 (ECONNREFUSED) against addr=10.0.0.2, port=4420; remaining identical repeats elided ...]
00:26:39.354 [2024-12-06 03:34:59.155661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.354 [2024-12-06 03:34:59.155678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.354 qpair failed and we were unable to recover it. 00:26:39.354 [2024-12-06 03:34:59.155906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.354 [2024-12-06 03:34:59.155926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.354 qpair failed and we were unable to recover it. 00:26:39.354 [2024-12-06 03:34:59.156139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.354 [2024-12-06 03:34:59.156157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.354 qpair failed and we were unable to recover it. 00:26:39.354 [2024-12-06 03:34:59.156414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.354 [2024-12-06 03:34:59.156429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.354 qpair failed and we were unable to recover it. 00:26:39.354 [2024-12-06 03:34:59.156599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.354 [2024-12-06 03:34:59.156616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.354 qpair failed and we were unable to recover it. 
00:26:39.355 [2024-12-06 03:34:59.156701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.156717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.156860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.156876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.157040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.157057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.157199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.157215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.157315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.157330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 
00:26:39.355 [2024-12-06 03:34:59.157567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.157584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.157680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.157696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.157790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.157805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.157910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.157927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.158081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.158099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 
00:26:39.355 [2024-12-06 03:34:59.158202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.158217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.158374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.158391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.158625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.158642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.158791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.158807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.158953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.158970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 
00:26:39.355 [2024-12-06 03:34:59.159177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.159194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.159421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.159438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.159541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.159556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.159783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.159800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.160050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.160066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 
00:26:39.355 [2024-12-06 03:34:59.160292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.160309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.160454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.160469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.160611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.160627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.355 [2024-12-06 03:34:59.160777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.355 [2024-12-06 03:34:59.160795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.355 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.160896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.160911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 
00:26:39.356 [2024-12-06 03:34:59.161159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.161176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.161316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.161332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.161470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.161487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.161666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.161682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.161939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.161961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 
00:26:39.356 [2024-12-06 03:34:59.162182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.162199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.162433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.162450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.162658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.162675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.162852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.162869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.163110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.163127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 
00:26:39.356 [2024-12-06 03:34:59.163238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.163255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.163349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.163365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.163575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.163592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.163743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.163759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.163839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.163854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 
00:26:39.356 [2024-12-06 03:34:59.164086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.164103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.164249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.164266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.164438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.164454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.164561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.164578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.164721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.164737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 
00:26:39.356 [2024-12-06 03:34:59.164825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.164840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.165093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.165110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.165349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.165366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.165575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.165591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.165854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.165871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 
00:26:39.356 [2024-12-06 03:34:59.165959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.165977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.166125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.356 [2024-12-06 03:34:59.166141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.356 qpair failed and we were unable to recover it. 00:26:39.356 [2024-12-06 03:34:59.166293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.166309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.166418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.166434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.166664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.166680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 
00:26:39.357 [2024-12-06 03:34:59.166831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.166847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.167007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.167024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.167258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.167275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.167361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.167376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.167452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.167467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 
00:26:39.357 [2024-12-06 03:34:59.167623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.167639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.167793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.167809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.167952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.167969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.168123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.168140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.168307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.168322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 
00:26:39.357 [2024-12-06 03:34:59.168465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.168478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.168740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.168753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.168958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.168971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.169127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.169140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 00:26:39.357 [2024-12-06 03:34:59.169227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.169239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 
00:26:39.357 [2024-12-06 03:34:59.169483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.357 [2024-12-06 03:34:59.169497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.357 qpair failed and we were unable to recover it. 
[... the preceding three messages repeated verbatim for tqpair=0x7f4cd8000b90 with advancing timestamps from 03:34:59.169640 through 03:34:59.189935 ...]
00:26:39.361 [2024-12-06 03:34:59.190191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.361 [2024-12-06 03:34:59.190210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.361 qpair failed and we were unable to recover it. 
00:26:39.361 [2024-12-06 03:34:59.190357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.361 [2024-12-06 03:34:59.190374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.361 qpair failed and we were unable to recover it. 00:26:39.361 [2024-12-06 03:34:59.190625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.361 [2024-12-06 03:34:59.190642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.361 qpair failed and we were unable to recover it. 00:26:39.361 [2024-12-06 03:34:59.190796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.361 [2024-12-06 03:34:59.190813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.361 qpair failed and we were unable to recover it. 00:26:39.361 [2024-12-06 03:34:59.190966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.361 [2024-12-06 03:34:59.190983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.191127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.191143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 
00:26:39.362 [2024-12-06 03:34:59.191300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.191316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.191535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.191551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.191768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.191784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.192035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.192052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.192260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.192277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 
00:26:39.362 [2024-12-06 03:34:59.192505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.192521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.192748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.192765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.193022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.193039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.193191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.193208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.193439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.193454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 
00:26:39.362 [2024-12-06 03:34:59.193548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.193560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.193783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.193795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.194004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.194017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.194170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.194182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.194249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.194261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 
00:26:39.362 [2024-12-06 03:34:59.194414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.194426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.194626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.194637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.194714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.194726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.194867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.194879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.195127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.195140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 
00:26:39.362 [2024-12-06 03:34:59.195304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.195316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.195423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.195441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.195679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.195695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.195932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.195954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.196049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.196064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 
00:26:39.362 [2024-12-06 03:34:59.196145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.362 [2024-12-06 03:34:59.196160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.362 qpair failed and we were unable to recover it. 00:26:39.362 [2024-12-06 03:34:59.196369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.196387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.196541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.196557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.196789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.196806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.196962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.196980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 
00:26:39.363 [2024-12-06 03:34:59.197126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.197144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.197239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.197256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.197409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.197427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.197569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.197585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.197683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.197699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 
00:26:39.363 [2024-12-06 03:34:59.197932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.197953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.198108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.198124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.198354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.198370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.198548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.198564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.198734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.198749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 
00:26:39.363 [2024-12-06 03:34:59.198894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.198906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.199067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.199079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.199221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.199233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.199438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.199450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.199619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.199631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 
00:26:39.363 [2024-12-06 03:34:59.199777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.199790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.199932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.199945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.200086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.200099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.200359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.200376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.200523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.200539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 
00:26:39.363 [2024-12-06 03:34:59.200631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.200647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.200873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.200889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.201046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.201063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.201213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.201230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 00:26:39.363 [2024-12-06 03:34:59.201371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.363 [2024-12-06 03:34:59.201387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.363 qpair failed and we were unable to recover it. 
00:26:39.363 [2024-12-06 03:34:59.201595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.201611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.201755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.201771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.202006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.202021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.202169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.202182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.202328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.202340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 
00:26:39.364 [2024-12-06 03:34:59.202486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.202498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.202677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.202689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.202915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.202927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.203099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.203112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.203321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.203333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 
00:26:39.364 [2024-12-06 03:34:59.203487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.203499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.203726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.203738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.203952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.203965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.204165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.204177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 00:26:39.364 [2024-12-06 03:34:59.204342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.204354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it. 
00:26:39.364 [2024-12-06 03:34:59.204499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.364 [2024-12-06 03:34:59.204511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.364 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error messages for tqpair=0x7f4cd8000b90 (addr=10.0.0.2, port=4420) repeated from 03:34:59.204597 through 03:34:59.224258; repeats omitted]
00:26:39.368 [2024-12-06 03:34:59.224410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.224423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 00:26:39.368 [2024-12-06 03:34:59.224678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.224691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 00:26:39.368 [2024-12-06 03:34:59.224892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.224905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 00:26:39.368 [2024-12-06 03:34:59.225167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.225179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 00:26:39.368 [2024-12-06 03:34:59.225322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.225334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 
00:26:39.368 [2024-12-06 03:34:59.225476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.225488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 00:26:39.368 [2024-12-06 03:34:59.225735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.225747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 00:26:39.368 [2024-12-06 03:34:59.225826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.225838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 00:26:39.368 [2024-12-06 03:34:59.225987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.225999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 00:26:39.368 [2024-12-06 03:34:59.226086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.368 [2024-12-06 03:34:59.226098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.368 qpair failed and we were unable to recover it. 
00:26:39.368 [2024-12-06 03:34:59.226300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.226312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.226535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.226547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.226628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.226639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.226778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.226790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.227014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.227027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 
00:26:39.369 [2024-12-06 03:34:59.227267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.227279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.227455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.227467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.227677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.227689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.227896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.227908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.228046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.228058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 
00:26:39.369 [2024-12-06 03:34:59.228237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.228250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.228422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.228434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.228647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.228659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.228875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.228890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.229041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.229054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 
00:26:39.369 [2024-12-06 03:34:59.229205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.229218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.229314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.229325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.229474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.229486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.229619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.229631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.229766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.229779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 
00:26:39.369 [2024-12-06 03:34:59.230007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.230020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.230250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.230262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.230437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.230450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.230671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.230683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.230779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.230791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 
00:26:39.369 [2024-12-06 03:34:59.230958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.230972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.231108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.231120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.231290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.231303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.231553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.231565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 00:26:39.369 [2024-12-06 03:34:59.231701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.369 [2024-12-06 03:34:59.231712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.369 qpair failed and we were unable to recover it. 
00:26:39.369 [2024-12-06 03:34:59.231778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.231790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.231961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.231973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.232198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.232212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.232392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.232405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.232581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.232594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 
00:26:39.370 [2024-12-06 03:34:59.232746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.232759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.232930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.232942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.233105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.233119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.233202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.233214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.233364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.233377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 
00:26:39.370 [2024-12-06 03:34:59.233450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.233462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.233597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.233610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.233823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.233835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.233973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.233985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.234151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.234163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 
00:26:39.370 [2024-12-06 03:34:59.234413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.234426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.234495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.234505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.234664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.234676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.234955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.234968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.235114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.235126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 
00:26:39.370 [2024-12-06 03:34:59.235351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.235364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.235439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.235450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.235516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.235527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.235664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.235679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.235761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.235772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 
00:26:39.370 [2024-12-06 03:34:59.235998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.236011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.236166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.236178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.236254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.236265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.236342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.236353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 00:26:39.370 [2024-12-06 03:34:59.236572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.370 [2024-12-06 03:34:59.236584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.370 qpair failed and we were unable to recover it. 
00:26:39.370 [2024-12-06 03:34:59.236716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.236727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.236956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.236968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.237120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.237132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.237355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.237367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.237861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.237883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 
00:26:39.371 [2024-12-06 03:34:59.237979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.237992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.238200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.238212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.238373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.238385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.238531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.238543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.238748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.238760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 
00:26:39.371 [2024-12-06 03:34:59.238908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.238920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.239087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.239100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.239196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.239207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.239413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.239425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.239671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.239684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 
00:26:39.371 [2024-12-06 03:34:59.239828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.239841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.240077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.240091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.240240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.240254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.240398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.240410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.240553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.240566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 
00:26:39.371 [2024-12-06 03:34:59.240666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.240678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.240756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.240768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.240848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.240859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.241081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.241095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.241183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.241194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 
00:26:39.371 [2024-12-06 03:34:59.241280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.241292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.241430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.241442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.241667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.241679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.241869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.371 [2024-12-06 03:34:59.241882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.371 qpair failed and we were unable to recover it. 00:26:39.371 [2024-12-06 03:34:59.242017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.242029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 
00:26:39.372 [2024-12-06 03:34:59.242180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.242194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.242279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.242290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.242482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.242494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.242641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.242657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.242880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.242893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 
00:26:39.372 [2024-12-06 03:34:59.243056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.243068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.243254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.243267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.243506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.243520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.243612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.243625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.243790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.243803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 
00:26:39.372 [2024-12-06 03:34:59.244017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.244031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.244255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.244268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.244417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.244429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.244574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.244586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.244737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.244750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 
00:26:39.372 [2024-12-06 03:34:59.244997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.245010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.245190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.245202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.245337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.245349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.245506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.245518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.245653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.245666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 
00:26:39.372 [2024-12-06 03:34:59.245914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.245926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.246146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.246159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.246367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.246380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.246579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.246591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.246821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.246833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 
00:26:39.372 [2024-12-06 03:34:59.246987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.246999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.247248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.247260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.372 qpair failed and we were unable to recover it. 00:26:39.372 [2024-12-06 03:34:59.247420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.372 [2024-12-06 03:34:59.247431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.247526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.247537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.247599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.247610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 
00:26:39.373 [2024-12-06 03:34:59.247765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.247778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.247883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.247895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.248053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.248066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.248208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.248220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.248422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.248436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 
00:26:39.373 [2024-12-06 03:34:59.248594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.248608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.248767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.248780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.248952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.248964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.249050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.249061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.249226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.249238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 
00:26:39.373 [2024-12-06 03:34:59.249404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.249417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.249568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.249581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.249725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.249737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.249870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.249886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.250035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.250047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 
00:26:39.373 [2024-12-06 03:34:59.250193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.250205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.250457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.250470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.250549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.250560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.250644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.250655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.250751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.250762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 
00:26:39.373 [2024-12-06 03:34:59.250864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.373 [2024-12-06 03:34:59.250875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.373 qpair failed and we were unable to recover it. 00:26:39.373 [2024-12-06 03:34:59.250978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.250990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.251153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.251167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.251363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.251376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.251469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.251480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 
00:26:39.374 [2024-12-06 03:34:59.251548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.251559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.251805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.251818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.251999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.252012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.252159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.252172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.252343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.252356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 
00:26:39.374 [2024-12-06 03:34:59.252489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.252502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.252644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.252657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.252869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.252882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.253032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.253045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.253285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.253297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 
00:26:39.374 [2024-12-06 03:34:59.253500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.253514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.253677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.253689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.253831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.253844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.253991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.254004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.254092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.254104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 
00:26:39.374 [2024-12-06 03:34:59.254185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.254196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.254332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.254344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.254487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.254499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.254603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.254617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 00:26:39.374 [2024-12-06 03:34:59.254755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.374 [2024-12-06 03:34:59.254767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.374 qpair failed and we were unable to recover it. 
00:26:39.376 [2024-12-06 03:34:59.260613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.376 [2024-12-06 03:34:59.260640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:39.376 qpair failed and we were unable to recover it.
00:26:39.376 [2024-12-06 03:34:59.260856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.376 [2024-12-06 03:34:59.260873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:39.376 qpair failed and we were unable to recover it.
00:26:39.376 [2024-12-06 03:34:59.261110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.376 [2024-12-06 03:34:59.261127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:39.376 qpair failed and we were unable to recover it.
00:26:39.376 [2024-12-06 03:34:59.261335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.376 [2024-12-06 03:34:59.261351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:39.376 qpair failed and we were unable to recover it.
00:26:39.376 [2024-12-06 03:34:59.261455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.376 [2024-12-06 03:34:59.261470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:39.376 qpair failed and we were unable to recover it.
00:26:39.377 [2024-12-06 03:34:59.267566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.377 [2024-12-06 03:34:59.267596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.377 qpair failed and we were unable to recover it.
00:26:39.377 [2024-12-06 03:34:59.267770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.377 [2024-12-06 03:34:59.267792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.377 qpair failed and we were unable to recover it.
00:26:39.377 [2024-12-06 03:34:59.267885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.377 [2024-12-06 03:34:59.267904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:39.377 qpair failed and we were unable to recover it.
00:26:39.377 [2024-12-06 03:34:59.268067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.377 [2024-12-06 03:34:59.268081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.377 qpair failed and we were unable to recover it.
00:26:39.377 [2024-12-06 03:34:59.268191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.377 [2024-12-06 03:34:59.268205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.377 qpair failed and we were unable to recover it.
00:26:39.377 [2024-12-06 03:34:59.268297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.377 [2024-12-06 03:34:59.268308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.377 qpair failed and we were unable to recover it. 00:26:39.377 [2024-12-06 03:34:59.268392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.377 [2024-12-06 03:34:59.268405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.377 qpair failed and we were unable to recover it. 00:26:39.377 [2024-12-06 03:34:59.268517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.377 [2024-12-06 03:34:59.268529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.377 qpair failed and we were unable to recover it. 00:26:39.377 [2024-12-06 03:34:59.268765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.377 [2024-12-06 03:34:59.268777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.377 qpair failed and we were unable to recover it. 00:26:39.377 [2024-12-06 03:34:59.268966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.377 [2024-12-06 03:34:59.268980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.377 qpair failed and we were unable to recover it. 
00:26:39.377 [2024-12-06 03:34:59.269178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.377 [2024-12-06 03:34:59.269190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.377 qpair failed and we were unable to recover it. 00:26:39.377 [2024-12-06 03:34:59.269390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.377 [2024-12-06 03:34:59.269402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.377 qpair failed and we were unable to recover it. 00:26:39.377 [2024-12-06 03:34:59.269536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.377 [2024-12-06 03:34:59.269549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.377 qpair failed and we were unable to recover it. 00:26:39.377 [2024-12-06 03:34:59.269721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.377 [2024-12-06 03:34:59.269736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.377 qpair failed and we were unable to recover it. 00:26:39.377 [2024-12-06 03:34:59.269834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.269851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 
00:26:39.378 [2024-12-06 03:34:59.270079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.270096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 00:26:39.378 [2024-12-06 03:34:59.270282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.270299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 00:26:39.378 [2024-12-06 03:34:59.270463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.270479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 00:26:39.378 [2024-12-06 03:34:59.270729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.270745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 00:26:39.378 [2024-12-06 03:34:59.270854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.270866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 
00:26:39.378 [2024-12-06 03:34:59.271011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.271024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 00:26:39.378 [2024-12-06 03:34:59.271103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.271114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 00:26:39.378 [2024-12-06 03:34:59.271262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.271274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 00:26:39.378 [2024-12-06 03:34:59.271473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.271485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 00:26:39.378 [2024-12-06 03:34:59.271623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.378 [2024-12-06 03:34:59.271635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.378 qpair failed and we were unable to recover it. 
00:26:39.378 [2024-12-06 03:34:59.271881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.271893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.272048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.272061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.272217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.272230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.272377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.272389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.272637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.272649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.272865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.272878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.273045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.273058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.273252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.273264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.273468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.273480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.273587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.273600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.273752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.273764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.273860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.273872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.274141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.274154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.274251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.274263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.274506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.274518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.274620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.274641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.274831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.274849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.275009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.275027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.275241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.378 [2024-12-06 03:34:59.275259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.378 qpair failed and we were unable to recover it.
00:26:39.378 [2024-12-06 03:34:59.275364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.275381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.275563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.275580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.275792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.275810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.275921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.275937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.276121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.276138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.276286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.276302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.276462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.276478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.276641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.276657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.276753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.276769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.276979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.276996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.277087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.277104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.277282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.277298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.277397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.277412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.277574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.277590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.277744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.277761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.277933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.277956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.278221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.278237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.278408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.278425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.278572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.278588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.278792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.278808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.279047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.279064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.279220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.279236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.279317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.279331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.279510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.279524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.279608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.279619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.379 [2024-12-06 03:34:59.279692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.379 [2024-12-06 03:34:59.279703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.379 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.279838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.279850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.279964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.279976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.280192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.280205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.280419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.280432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.280639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.280651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.280798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.280811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.281014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.281027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.281172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.281185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.281378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.281391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.281543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.281555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.281719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.281734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.281916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.281928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.282090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.282103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.282202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.282215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.282450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.282462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.282757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.282770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.282906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.282919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.283023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.283035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.283133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.283145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.283315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.283328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.283472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.283484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.283575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.283588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.283724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.283737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.283982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.283994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.284062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.284074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.284208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.380 [2024-12-06 03:34:59.284220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.380 qpair failed and we were unable to recover it.
00:26:39.380 [2024-12-06 03:34:59.284367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.284381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.284660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.284673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.284817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.284829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.284914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.284926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.285022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.285036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.285121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.285135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.285337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.285350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.285450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.285462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.285723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.285736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.285905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.285918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.286143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.286155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.286359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.286371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.286598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.286611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.286766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.286779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.286921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.286932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.287037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.287051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.287247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.287259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.287425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.287438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.287532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.287543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.287768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.287780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.287979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.287992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.288158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.288171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.288400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.288412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.288514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.381 [2024-12-06 03:34:59.288526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.381 qpair failed and we were unable to recover it.
00:26:39.381 [2024-12-06 03:34:59.288605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.381 [2024-12-06 03:34:59.288619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.381 qpair failed and we were unable to recover it. 00:26:39.381 [2024-12-06 03:34:59.288805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.381 [2024-12-06 03:34:59.288817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.381 qpair failed and we were unable to recover it. 00:26:39.381 [2024-12-06 03:34:59.289018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.381 [2024-12-06 03:34:59.289031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.381 qpair failed and we were unable to recover it. 00:26:39.381 [2024-12-06 03:34:59.289163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.381 [2024-12-06 03:34:59.289175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.381 qpair failed and we were unable to recover it. 00:26:39.381 [2024-12-06 03:34:59.289322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.381 [2024-12-06 03:34:59.289334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 
00:26:39.382 [2024-12-06 03:34:59.289478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.289490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.289623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.289635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.289837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.289850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.289995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.290008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.290152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.290165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 
00:26:39.382 [2024-12-06 03:34:59.290328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.290340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.290572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.290585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.290761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.290774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.290927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.290940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.291097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.291110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 
00:26:39.382 [2024-12-06 03:34:59.291339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.291352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.291556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.291568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.291718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.291731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.291929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.291941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.292107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.292119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 
00:26:39.382 [2024-12-06 03:34:59.292262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.292275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.292417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.292429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.292647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.292659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.292854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.292866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.293009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.293022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 
00:26:39.382 [2024-12-06 03:34:59.293245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.293258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.293427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.293439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.293608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.293620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.293752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.293764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.293935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.293951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 
00:26:39.382 [2024-12-06 03:34:59.294126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.294139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.294273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.294286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.294454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.382 [2024-12-06 03:34:59.294466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.382 qpair failed and we were unable to recover it. 00:26:39.382 [2024-12-06 03:34:59.294612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.294626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.294774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.294786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 
00:26:39.383 [2024-12-06 03:34:59.294985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.294999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.295215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.295228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.295295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.295306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.295405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.295418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.295555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.295567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 
00:26:39.383 [2024-12-06 03:34:59.295720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.295734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.295961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.295974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.296064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.296076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.296227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.296239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.296333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.296345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 
00:26:39.383 [2024-12-06 03:34:59.296517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.296530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.296662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.296675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.296768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.296780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.296956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.296970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.297148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.297160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 
00:26:39.383 [2024-12-06 03:34:59.297394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.297407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.297606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.297618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.297706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.297719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.297924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.297937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.298084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.298097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 
00:26:39.383 [2024-12-06 03:34:59.298316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.298330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.298463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.298475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.298696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.298708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.298787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.298798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.298878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.298889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 
00:26:39.383 [2024-12-06 03:34:59.298974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.298986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.299184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.299196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.299288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.383 [2024-12-06 03:34:59.299300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.383 qpair failed and we were unable to recover it. 00:26:39.383 [2024-12-06 03:34:59.299402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.299415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.299517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.299530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 
00:26:39.384 [2024-12-06 03:34:59.299618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.299631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.299778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.299791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.300003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.300090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.300178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 
00:26:39.384 [2024-12-06 03:34:59.300275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.300359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.300458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.300559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.300660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 
00:26:39.384 [2024-12-06 03:34:59.300813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.300909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.300920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.301062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.301074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.301163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.301175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.301319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.301331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 
00:26:39.384 [2024-12-06 03:34:59.301480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.301495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.301595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.301608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.301685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.301696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.301839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.301852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.301928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.301939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 
00:26:39.384 [2024-12-06 03:34:59.302125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.302139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.302212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.302223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.302288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.302299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.302429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.302442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 00:26:39.384 [2024-12-06 03:34:59.302528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.384 [2024-12-06 03:34:59.302539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.384 qpair failed and we were unable to recover it. 
00:26:39.389 [2024-12-06 03:34:59.316004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.316018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.316152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.316164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.316324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.316337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.316427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.316439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.316591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.316603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 
00:26:39.389 [2024-12-06 03:34:59.316753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.316766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.316830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.316841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.316918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.316929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.317085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.317098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.317245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.317258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 
00:26:39.389 [2024-12-06 03:34:59.317503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.317515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.317593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.317604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.317677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.317689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.317829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.317841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.317919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.317930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 
00:26:39.389 [2024-12-06 03:34:59.318009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.318021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.318243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.318254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.318337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.318349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.318435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.318448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.318584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.318597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 
00:26:39.389 [2024-12-06 03:34:59.318745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.318757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.319000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.319019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.319100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.319112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.319259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.319271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.319353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.319365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 
00:26:39.389 [2024-12-06 03:34:59.319430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.319444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.319591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.389 [2024-12-06 03:34:59.319602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.389 qpair failed and we were unable to recover it. 00:26:39.389 [2024-12-06 03:34:59.319750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.319763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.319860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.319873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.320095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.320108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 
00:26:39.390 [2024-12-06 03:34:59.320192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.320204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.320359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.320371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.320468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.320480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.320570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.320582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.320717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.320730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 
00:26:39.390 [2024-12-06 03:34:59.320813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.320825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.320904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.320915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.321000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.321012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.321156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.321169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.321240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.321251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 
00:26:39.390 [2024-12-06 03:34:59.321386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.321401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.321479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.321492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.321555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.321566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.321658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.321670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.321734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.321745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 
00:26:39.390 [2024-12-06 03:34:59.321883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.321895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.322046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.322058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.322126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.322137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.322265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.322277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.322341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.322352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 
00:26:39.390 [2024-12-06 03:34:59.322484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.322497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.322574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.322586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.322716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.322727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.322858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.322870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-12-06 03:34:59.323002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-12-06 03:34:59.323014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 
00:26:39.390 [2024-12-06 03:34:59.323149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.323160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.323293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.323305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.323401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.323414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.323496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.323509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.323640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.323653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 
00:26:39.391 [2024-12-06 03:34:59.323811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.323823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.324023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.324035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.324199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.324211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.324296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.324308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.324506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.324518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 
00:26:39.391 [2024-12-06 03:34:59.324683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.324695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.324806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.324817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.324957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.324970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.325112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.325123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.325200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.325214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 
00:26:39.391 [2024-12-06 03:34:59.325299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.325310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.325485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.325497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.325566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.325577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.325642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.325654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-12-06 03:34:59.325733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.325745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 
00:26:39.391 [2024-12-06 03:34:59.325883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-12-06 03:34:59.325895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 
[… the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error pair for tqpair=0x7f4cd8000b90 (addr=10.0.0.2, port=4420), each ending in "qpair failed and we were unable to recover it.", repeats continuously from 03:34:59.326027 through 03:34:59.344055 …]
00:26:39.395 [2024-12-06 03:34:59.344192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.344205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.344406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.344419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.344507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.344518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.344599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.344610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.344686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.344697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 
00:26:39.395 [2024-12-06 03:34:59.344836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.344848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.345008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.345021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.345171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.345184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.345354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.345366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.345471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.345483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 
00:26:39.395 [2024-12-06 03:34:59.345633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.345648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.345854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.345867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.346067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.346079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.346300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.346312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.395 [2024-12-06 03:34:59.346463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.346475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 
00:26:39.395 [2024-12-06 03:34:59.346804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.395 [2024-12-06 03:34:59.346816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.395 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.346986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.346999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.347237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.347248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.347398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.347410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.347496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.347507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 
00:26:39.396 [2024-12-06 03:34:59.347705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.347717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.347886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.347898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.348133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.348145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.348292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.348304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.348437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.348450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 
00:26:39.396 [2024-12-06 03:34:59.348658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.348671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.348802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.348814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.348994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.349007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.349207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.349220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.349407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.349420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 
00:26:39.396 [2024-12-06 03:34:59.349629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.349642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.349728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.349739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.349872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.349884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.350098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.350111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.350257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.350269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 
00:26:39.396 [2024-12-06 03:34:59.350487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.350499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.350791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.350803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.351056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.351069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.351298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.351310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.351403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.351416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 
00:26:39.396 [2024-12-06 03:34:59.351510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.351522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.351593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.351604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.351829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.351842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.351921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.351932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.352094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.352107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 
00:26:39.396 [2024-12-06 03:34:59.352199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.352211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.352361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.352374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.396 [2024-12-06 03:34:59.352594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.396 [2024-12-06 03:34:59.352612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.396 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.352813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.352826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.353073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.353087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 
00:26:39.397 [2024-12-06 03:34:59.353259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.353271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.353431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.353443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.353693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.353706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.353930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.353942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.354042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.354054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 
00:26:39.397 [2024-12-06 03:34:59.354268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.354280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.354429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.354441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.354588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.354600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.354763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.354775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.354999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.355013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 
00:26:39.397 [2024-12-06 03:34:59.355106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.355117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.355247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.355259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.355340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.355351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.355454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.355466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.355630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.355642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 
00:26:39.397 [2024-12-06 03:34:59.355794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.355806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.356004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.356016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.356175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.356186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.356329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.356341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.356435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.356447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 
00:26:39.397 [2024-12-06 03:34:59.356604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.356616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.356755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.356767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.356970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.356984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.357191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.357204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 00:26:39.397 [2024-12-06 03:34:59.357426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.357438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 
00:26:39.397 [2024-12-06 03:34:59.357598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.397 [2024-12-06 03:34:59.357611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.397 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeated for tqpair=0x7f4cd8000b90, addr=10.0.0.2, port=4420 through timestamp 03:34:59.379 ...]
00:26:39.401 [2024-12-06 03:34:59.379570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.379583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.379683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.379695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.379836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.379848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.379982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.379995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.380128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.380140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 
00:26:39.401 [2024-12-06 03:34:59.380308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.380320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.380488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.380501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.380668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.380684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.380910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.380921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.381074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.381086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 
00:26:39.401 [2024-12-06 03:34:59.381219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.381232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.381313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.381324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.381470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.381484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.381557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.381569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.381720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.381733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 
00:26:39.401 [2024-12-06 03:34:59.381965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.381977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.382137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.382149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.382244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.382256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.382501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.382513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.382729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.382742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 
00:26:39.401 [2024-12-06 03:34:59.382897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.382909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.383080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.383092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.383347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.383360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.383494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.383507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.383723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.383735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 
00:26:39.401 [2024-12-06 03:34:59.383955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-12-06 03:34:59.383968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-12-06 03:34:59.384174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.384186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.384284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.384297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.384499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.384511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.384714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.384726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-12-06 03:34:59.384955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.384968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.385112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.385124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.385358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.385370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.385517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.385530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.385761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.385774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-12-06 03:34:59.385914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.385927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.386082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.386095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.386238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.386251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.386422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.386434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.386575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.386587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-12-06 03:34:59.386806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.386818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.386904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.386916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.387160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.387173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.387339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.387351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.387433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.387444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-12-06 03:34:59.387587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.387599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.387773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.387785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.387992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.388007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.388234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.388246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.388388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.388401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-12-06 03:34:59.388636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.388648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.388800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.388812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.389039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.389052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.389197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.389210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.389406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.389418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-12-06 03:34:59.389611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.389624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-12-06 03:34:59.389821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-12-06 03:34:59.389834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.390032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.390045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.390270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.390283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.390379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.390392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 
00:26:39.403 [2024-12-06 03:34:59.390540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.390553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.390695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.390707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.390872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.390884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.390966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.390978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.391204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.391215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 
00:26:39.403 [2024-12-06 03:34:59.391428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.391440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.391606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.391619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.391790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.391803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.391999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.392012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.392184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.392196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 
00:26:39.403 [2024-12-06 03:34:59.392411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.392424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.392613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.392626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.392786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.392799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.393041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.393054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-12-06 03:34:59.393209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-12-06 03:34:59.393222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 
00:26:39.403 [2024-12-06 03:34:59.393399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.403 [2024-12-06 03:34:59.393412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.403 qpair failed and we were unable to recover it.
00:26:39.406 [2024-12-06 03:34:59.415074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.415087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.415311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.415324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.415475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.415488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.415724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.415737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.415977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.415997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 
00:26:39.406 [2024-12-06 03:34:59.416207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.416223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.416463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.416480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.416646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.416662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.416810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.416826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.417034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.417050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 
00:26:39.406 [2024-12-06 03:34:59.417144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.417159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.417399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.417416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.417632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.417648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.417908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.417925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.418117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.418134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 
00:26:39.406 [2024-12-06 03:34:59.418289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.418306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.418478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.418494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.418666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.418686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.418919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.418935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.419095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.419111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 
00:26:39.406 [2024-12-06 03:34:59.419296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.419312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-12-06 03:34:59.419526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-12-06 03:34:59.419543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.419719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.419735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.419966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.419983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.420076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.420091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 
00:26:39.407 [2024-12-06 03:34:59.420232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.420249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.420344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.420359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.420566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.420582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.420673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.420687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.420846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.420863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 
00:26:39.407 [2024-12-06 03:34:59.420971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.420989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.421148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.421164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.421329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.421346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.421516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.421533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.421677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.421693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 
00:26:39.407 [2024-12-06 03:34:59.421890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.421907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.422088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.422105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.422216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.422233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.422397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.422413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.422498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.422513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 
00:26:39.407 [2024-12-06 03:34:59.422718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.422735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.422876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.422892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.423047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.423064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.423218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-12-06 03:34:59.423234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-12-06 03:34:59.423415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.423435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 
00:26:39.408 [2024-12-06 03:34:59.423594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.423610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.423848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.423864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.424125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.424142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.424352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.424368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.424526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.424541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 
00:26:39.408 [2024-12-06 03:34:59.424756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.424773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.424987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.425003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.425236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.425252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.425492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.425508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.425668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.425684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 
00:26:39.408 [2024-12-06 03:34:59.425843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.425859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.426120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.426137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.426326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.426345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.426502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.426518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.426752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.426769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 
00:26:39.408 [2024-12-06 03:34:59.426974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.426991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.427159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.427175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.427275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.427290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.427497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.427513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.427662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.427678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 
00:26:39.408 [2024-12-06 03:34:59.427753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.427768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.427930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.427950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.428200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.428217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.428394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.428409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.428664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.428681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 
00:26:39.408 [2024-12-06 03:34:59.428918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.428934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.429172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.429190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.429349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.429365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.429599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.429615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.429850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.429866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 
00:26:39.408 [2024-12-06 03:34:59.430018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.430035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.430188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.430204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.430432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.430448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.430625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.430641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.408 qpair failed and we were unable to recover it. 00:26:39.408 [2024-12-06 03:34:59.430819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.408 [2024-12-06 03:34:59.430835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 
00:26:39.409 [2024-12-06 03:34:59.431084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.431101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.431351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.431367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.431584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.431601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.431831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.431848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.432103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.432126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 
00:26:39.409 [2024-12-06 03:34:59.432347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.432361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.432590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.432603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.432811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.432823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.432972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.432985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.433147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.433159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 
00:26:39.409 [2024-12-06 03:34:59.433346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.433358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.433491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.433503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.433649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.433661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.433883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.433895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.434122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.434134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 
00:26:39.409 [2024-12-06 03:34:59.434334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.434346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.434491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.434503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.434673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.434687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.434891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.434903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.435120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.435133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 
00:26:39.409 [2024-12-06 03:34:59.435219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.435230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.435370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.435382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.435600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.435613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.435861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.435873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.436130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.436143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 
00:26:39.409 [2024-12-06 03:34:59.436291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.436304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.436502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.436514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.436662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.436675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.436876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.436888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.437117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.437130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 
00:26:39.409 [2024-12-06 03:34:59.437233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.437245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.437381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.437394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.437571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.437583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.437649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.437660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.437805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.437816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 
00:26:39.409 [2024-12-06 03:34:59.438013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.438026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.438169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.409 [2024-12-06 03:34:59.438182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.409 qpair failed and we were unable to recover it. 00:26:39.409 [2024-12-06 03:34:59.438347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.438359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.438527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.438539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.438792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.438804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 
00:26:39.410 [2024-12-06 03:34:59.439035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.439049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.439274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.439286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.439451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.439463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.439686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.439699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.439868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.439889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 
00:26:39.410 [2024-12-06 03:34:59.440072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.440091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.440191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.440206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.440435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.440451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.440596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.440612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.440763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.440780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 
00:26:39.410 [2024-12-06 03:34:59.440876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.440891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.441099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.441117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.441291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.441307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.441458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.441474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.441558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.441574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 
00:26:39.410 [2024-12-06 03:34:59.441785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.441801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.441967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.441984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.442214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.442231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.442413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.442430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.442658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.442674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 
00:26:39.410 [2024-12-06 03:34:59.442833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.442849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.442952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.442968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.443199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.443216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.443357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.443373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.443524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.443540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 
00:26:39.410 [2024-12-06 03:34:59.443712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.443729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.443955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.443972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.444125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.444141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.444284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.444301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.444531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.444548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 
00:26:39.410 [2024-12-06 03:34:59.444702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.444718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.444953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.444971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.445205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.445221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.445401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.445417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 00:26:39.410 [2024-12-06 03:34:59.445625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.410 [2024-12-06 03:34:59.445642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.410 qpair failed and we were unable to recover it. 
00:26:39.411 [2024-12-06 03:34:59.445893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.411 [2024-12-06 03:34:59.445909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.411 qpair failed and we were unable to recover it. 00:26:39.411 [2024-12-06 03:34:59.446069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.411 [2024-12-06 03:34:59.446086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.411 qpair failed and we were unable to recover it. 00:26:39.411 [2024-12-06 03:34:59.446179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.411 [2024-12-06 03:34:59.446195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.411 qpair failed and we were unable to recover it. 00:26:39.411 [2024-12-06 03:34:59.446336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.411 [2024-12-06 03:34:59.446352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.411 qpair failed and we were unable to recover it. 00:26:39.411 [2024-12-06 03:34:59.446582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.411 [2024-12-06 03:34:59.446599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.411 qpair failed and we were unable to recover it. 
00:26:39.411 [2024-12-06 03:34:59.446738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.446755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 00:26:39.712 [2024-12-06 03:34:59.447004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.447021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 00:26:39.712 [2024-12-06 03:34:59.447270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.447286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 00:26:39.712 [2024-12-06 03:34:59.447466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.447483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 00:26:39.712 [2024-12-06 03:34:59.447712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.447732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 
00:26:39.712 [2024-12-06 03:34:59.447885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.447902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 00:26:39.712 [2024-12-06 03:34:59.448083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.448100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 00:26:39.712 [2024-12-06 03:34:59.448339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.448355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 00:26:39.712 [2024-12-06 03:34:59.448528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.448545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 00:26:39.712 [2024-12-06 03:34:59.448708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.448724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 
00:26:39.712 [2024-12-06 03:34:59.448894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.712 [2024-12-06 03:34:59.448910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.712 qpair failed and we were unable to recover it. 
[previous connect()/qpair error pair repeated 41 times for tqpair=0x7f4ce0000b90, timestamps 03:34:59.448894 through 03:34:59.457041, differing only in timestamp]
00:26:39.713 [2024-12-06 03:34:59.457131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.713 [2024-12-06 03:34:59.457145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.713 qpair failed and we were unable to recover it. 
[previous connect()/qpair error pair repeated 74 times for tqpair=0x7f4cd8000b90, timestamps 03:34:59.457131 through 03:34:59.470580, differing only in timestamp]
00:26:39.715 [2024-12-06 03:34:59.470673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.470687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.470890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.470904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.471126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.471138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.471303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.471315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.471520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.471533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 
00:26:39.715 [2024-12-06 03:34:59.471730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.471743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.471903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.471915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.472170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.472184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.472384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.472397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.472568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.472581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 
00:26:39.715 [2024-12-06 03:34:59.472749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.472762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.472931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.472944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.473147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.473160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.473247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.473258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.473351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.473363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 
00:26:39.715 [2024-12-06 03:34:59.473506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.473518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.473657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.473670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.473832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.473844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.474091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.474103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.474337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.474349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 
00:26:39.715 [2024-12-06 03:34:59.474516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.474528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.474732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.474745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.474962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.715 [2024-12-06 03:34:59.474974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.715 qpair failed and we were unable to recover it. 00:26:39.715 [2024-12-06 03:34:59.475175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.475187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.475352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.475364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 
00:26:39.716 [2024-12-06 03:34:59.475502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.475514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.475737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.475750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.475843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.475854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.476077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.476090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.476290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.476303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 
00:26:39.716 [2024-12-06 03:34:59.476463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.476475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.476672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.476684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.476838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.476850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.477015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.477027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.477249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.477261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 
00:26:39.716 [2024-12-06 03:34:59.477389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.477401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.477631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.477643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.477787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.477799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.478016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.478029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.478256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.478270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 
00:26:39.716 [2024-12-06 03:34:59.478466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.478479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.478571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.478582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.478804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.478817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.479016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.479029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.479203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.479214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 
00:26:39.716 [2024-12-06 03:34:59.479347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.479359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.479576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.479589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.479819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.479831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.479907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.479918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.480064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.480077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 
00:26:39.716 [2024-12-06 03:34:59.480327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.480338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.480477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.480488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.480664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.480677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.480765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.480776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.716 qpair failed and we were unable to recover it. 00:26:39.716 [2024-12-06 03:34:59.480999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.716 [2024-12-06 03:34:59.481012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 
00:26:39.717 [2024-12-06 03:34:59.481157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.481169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.481371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.481384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.481543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.481556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.481634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.481646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.481867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.481879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 
00:26:39.717 [2024-12-06 03:34:59.482012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.482025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.482173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.482186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.482339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.482351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.482498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.482511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.482676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.482688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 
00:26:39.717 [2024-12-06 03:34:59.482912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.482925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.483160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.483172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.483390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.483403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.483674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.483686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.483774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.483786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 
00:26:39.717 [2024-12-06 03:34:59.484006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.484019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.484160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.484172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.484424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.484437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.484617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.484629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.484882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.484896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 
00:26:39.717 [2024-12-06 03:34:59.485114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.485127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.485351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.485363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.485586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.485599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.485768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.485780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.485910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.485923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 
00:26:39.717 [2024-12-06 03:34:59.486089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.486103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.486263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.486275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.486540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.486553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.486775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.486788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.486935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.486951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 
00:26:39.717 [2024-12-06 03:34:59.487171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.487183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.487405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.487418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.487593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.487605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.487828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.487840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.488038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.488051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 
00:26:39.717 [2024-12-06 03:34:59.488217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.488229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.488429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.717 [2024-12-06 03:34:59.488442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.717 qpair failed and we were unable to recover it. 00:26:39.717 [2024-12-06 03:34:59.488659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.488671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.488897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.488909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.489058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.489071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 
00:26:39.718 [2024-12-06 03:34:59.489166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.489177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.489319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.489331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.489549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.489561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.489649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.489661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.489886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.489898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 
00:26:39.718 [2024-12-06 03:34:59.490029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.490042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.490138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.490150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.490362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.490374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.490530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.490543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.490751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.490764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 
00:26:39.718 [2024-12-06 03:34:59.490989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.491002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.491232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.491245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.491406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.491419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.491620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.491632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.491829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.491842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 
00:26:39.718 [2024-12-06 03:34:59.492067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.492079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.492354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.492366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.492582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.492594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.492791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.492803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.493051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.493064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 
00:26:39.718 [2024-12-06 03:34:59.493215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.493227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.493409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.493421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.493587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.493600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.493740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.493752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.493966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.493981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 
00:26:39.718 [2024-12-06 03:34:59.494116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.494128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.494278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.494291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.494444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.494456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.494620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.494632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.494857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.494870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 
00:26:39.718 [2024-12-06 03:34:59.495075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.495087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.495171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.495182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.495351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.495364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.495463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.495474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 00:26:39.718 [2024-12-06 03:34:59.495689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.718 [2024-12-06 03:34:59.495702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.718 qpair failed and we were unable to recover it. 
00:26:39.719 [2024-12-06 03:34:59.495862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.495875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.495974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.495986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.496231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.496244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.496328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.496339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.496490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.496503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 
00:26:39.719 [2024-12-06 03:34:59.496649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.496662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.496740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.496751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.496952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.496965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.497112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.497125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.497275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.497287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 
00:26:39.719 [2024-12-06 03:34:59.497509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.497522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.497615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.497627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.497914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.497927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.498106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.498119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.498281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.498293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 
00:26:39.719 [2024-12-06 03:34:59.498508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.498522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.498622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.498643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.498793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.498810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.499026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.499043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.499195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.499212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 
00:26:39.719 [2024-12-06 03:34:59.499447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.499463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.499693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.499710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.499817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.499834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.500041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.500058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.500266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.500283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 
00:26:39.719 [2024-12-06 03:34:59.500513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.500529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.500679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.500694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.500794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.500809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.501038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.501056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.501267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.501284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 
00:26:39.719 [2024-12-06 03:34:59.501448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.501464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.501671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.501688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.501896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.501912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.502067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.502084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.502236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.502253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 
00:26:39.719 [2024-12-06 03:34:59.502513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.502529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.502771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.502787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.502933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.719 [2024-12-06 03:34:59.502954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.719 qpair failed and we were unable to recover it. 00:26:39.719 [2024-12-06 03:34:59.503141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.720 [2024-12-06 03:34:59.503157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.720 qpair failed and we were unable to recover it. 00:26:39.720 [2024-12-06 03:34:59.503382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.720 [2024-12-06 03:34:59.503399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.720 qpair failed and we were unable to recover it. 
00:26:39.720 [2024-12-06 03:34:59.503568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.720 [2024-12-06 03:34:59.503585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.720 qpair failed and we were unable to recover it. 00:26:39.720 [2024-12-06 03:34:59.503794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.720 [2024-12-06 03:34:59.503811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.720 qpair failed and we were unable to recover it. 00:26:39.720 [2024-12-06 03:34:59.503968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.720 [2024-12-06 03:34:59.503985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.720 qpair failed and we were unable to recover it. 00:26:39.720 [2024-12-06 03:34:59.504268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.720 [2024-12-06 03:34:59.504282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.720 qpair failed and we were unable to recover it. 00:26:39.720 [2024-12-06 03:34:59.504447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.720 [2024-12-06 03:34:59.504459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.720 qpair failed and we were unable to recover it. 
[... same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." message sequence repeated verbatim, timestamps 03:34:59.504628 through 03:34:59.525150 ...]
00:26:39.723 [2024-12-06 03:34:59.525215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.525226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.525383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.525395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.525619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.525631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.525717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.525728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.525970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.525982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 
00:26:39.723 [2024-12-06 03:34:59.526181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.526194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.526371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.526383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.526624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.526637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.526884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.526899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.527126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.527138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 
00:26:39.723 [2024-12-06 03:34:59.527281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.527293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.527532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.527545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.527805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.527817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.528018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.528031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.528193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.528206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 
00:26:39.723 [2024-12-06 03:34:59.528370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.528382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.528626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.528639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.528843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.528855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.529003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.529015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.529148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.529161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 
00:26:39.723 [2024-12-06 03:34:59.529241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.529252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.529410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.529423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.529614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.529627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.529887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.529900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.530043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.530056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 
00:26:39.723 [2024-12-06 03:34:59.530263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.530275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.530409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.530421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.530568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.530581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.530720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.530732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 00:26:39.723 [2024-12-06 03:34:59.530882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.723 [2024-12-06 03:34:59.530895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.723 qpair failed and we were unable to recover it. 
00:26:39.723 [2024-12-06 03:34:59.531037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.531050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.531198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.531211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.531302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.531314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.531495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.531508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.531592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.531604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 
00:26:39.724 [2024-12-06 03:34:59.531835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.531847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.532065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.532078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.532161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.532172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.532268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.532280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.532505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.532518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 
00:26:39.724 [2024-12-06 03:34:59.532714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.532727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.532906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.532918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.533175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.533188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.533413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.533426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.533601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.533613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 
00:26:39.724 [2024-12-06 03:34:59.533838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.533851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.533939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.533954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.534178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.534190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.534357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.534371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.534515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.534528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 
00:26:39.724 [2024-12-06 03:34:59.534752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.534765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.534935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.534952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.535103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.535117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.535265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.535278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.535452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.535464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 
00:26:39.724 [2024-12-06 03:34:59.535606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.535619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.535777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.535790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.535870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.535882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.535975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.535987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.536132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.536145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 
00:26:39.724 [2024-12-06 03:34:59.536284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.536297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.536428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.536440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.536689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.536702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.536797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.536809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.536945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.536961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 
00:26:39.724 [2024-12-06 03:34:59.537186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.537199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.537420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.537432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.537576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.724 [2024-12-06 03:34:59.537589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.724 qpair failed and we were unable to recover it. 00:26:39.724 [2024-12-06 03:34:59.537667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.725 [2024-12-06 03:34:59.537678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.725 qpair failed and we were unable to recover it. 00:26:39.725 [2024-12-06 03:34:59.537927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.725 [2024-12-06 03:34:59.537939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.725 qpair failed and we were unable to recover it. 
00:26:39.725 [2024-12-06 03:34:59.538090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.725 [2024-12-06 03:34:59.538103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.725 qpair failed and we were unable to recover it. 00:26:39.725 [2024-12-06 03:34:59.538287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.725 [2024-12-06 03:34:59.538300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.725 qpair failed and we were unable to recover it. 00:26:39.725 [2024-12-06 03:34:59.538459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.725 [2024-12-06 03:34:59.538472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.725 qpair failed and we were unable to recover it. 00:26:39.725 [2024-12-06 03:34:59.538671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.725 [2024-12-06 03:34:59.538683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.725 qpair failed and we were unable to recover it. 00:26:39.725 [2024-12-06 03:34:59.538906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.725 [2024-12-06 03:34:59.538919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.725 qpair failed and we were unable to recover it. 
00:26:39.725 [2024-12-06 03:34:59.539163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.725 [2024-12-06 03:34:59.539176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.725 qpair failed and we were unable to recover it.
[The pair of messages above — connect() refused with errno = 111 (ECONNREFUSED) followed by the sock connection error for tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats verbatim roughly 115 more times between 03:34:59.539 and 03:34:59.560; the duplicate lines have been trimmed.]
00:26:39.728 [2024-12-06 03:34:59.560833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.560845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.560989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.561003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.561225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.561238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.561463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.561476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.561557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.561568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 
00:26:39.728 [2024-12-06 03:34:59.561778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.561797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.561968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.561984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.562190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.562206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.562364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.562382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.562632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.562648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 
00:26:39.728 [2024-12-06 03:34:59.562854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.562871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.563082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.563098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.563256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.563272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.563506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.563522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.563684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.563701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 
00:26:39.728 [2024-12-06 03:34:59.563982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.563998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.564275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.564291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.564404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.564420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.564528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.564546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.564753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.564769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 
00:26:39.728 [2024-12-06 03:34:59.564967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.564984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.565080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.565096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.565257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.565273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.565481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.565497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.565680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.565696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 
00:26:39.728 [2024-12-06 03:34:59.565937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.728 [2024-12-06 03:34:59.565957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.728 qpair failed and we were unable to recover it. 00:26:39.728 [2024-12-06 03:34:59.566064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.566080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.566682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.566708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.566982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.567000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.567236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.567253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 
00:26:39.729 [2024-12-06 03:34:59.567461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.567478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.567639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.567655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.567890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.567906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.568051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.568069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.568277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.568293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 
00:26:39.729 [2024-12-06 03:34:59.568460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.568476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.568577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.568594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.568828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.568844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.568962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.568979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.569151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.569168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 
00:26:39.729 [2024-12-06 03:34:59.569318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.569334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.569564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.569579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.569785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.569797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.569943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.569965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.570114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.570126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 
00:26:39.729 [2024-12-06 03:34:59.570309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.570322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.570536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.570549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.570807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.570820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.570995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.571008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.571209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.571222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 
00:26:39.729 [2024-12-06 03:34:59.571362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.571375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.571465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.571476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.571622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.571633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.571900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.571914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.572116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.572128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 
00:26:39.729 [2024-12-06 03:34:59.572382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.572395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.572492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.572503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.572583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.572594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.572726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.572740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.572815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.572826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 
00:26:39.729 [2024-12-06 03:34:59.572905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.572916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.573181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.573195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.573360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.573373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.573573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.729 [2024-12-06 03:34:59.573586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.729 qpair failed and we were unable to recover it. 00:26:39.729 [2024-12-06 03:34:59.573810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.573822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 
00:26:39.730 [2024-12-06 03:34:59.573968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.573981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 00:26:39.730 [2024-12-06 03:34:59.574239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.574252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 00:26:39.730 [2024-12-06 03:34:59.574386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.574400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 00:26:39.730 [2024-12-06 03:34:59.574509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.574521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 00:26:39.730 [2024-12-06 03:34:59.574681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.574694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 
00:26:39.730 [2024-12-06 03:34:59.574778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.574790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 00:26:39.730 [2024-12-06 03:34:59.574956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.574969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 00:26:39.730 [2024-12-06 03:34:59.575151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.575165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 00:26:39.730 [2024-12-06 03:34:59.575395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.575407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 00:26:39.730 [2024-12-06 03:34:59.575486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.730 [2024-12-06 03:34:59.575498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.730 qpair failed and we were unable to recover it. 
00:26:39.730 [2024-12-06 03:34:59.575635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.730 [2024-12-06 03:34:59.575647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.730 qpair failed and we were unable to recover it.
00:26:39.730 [2024-12-06 03:34:59.577524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.730 [2024-12-06 03:34:59.577547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.730 qpair failed and we were unable to recover it.
00:26:39.733 [2024-12-06 03:34:59.595662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.595675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.595871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.595883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.596186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.596224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.596448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.596467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.596576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.596593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 
00:26:39.733 [2024-12-06 03:34:59.596832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.596847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.597012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.597029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.597279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.597296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.597452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.597468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.597734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.597750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 
00:26:39.733 [2024-12-06 03:34:59.597983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.598000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.598184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.598201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.598364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.598380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.598465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.598480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.598655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.598672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 
00:26:39.733 [2024-12-06 03:34:59.598834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.598855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.599087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.599105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.599333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.599349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.599528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.733 [2024-12-06 03:34:59.599545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.733 qpair failed and we were unable to recover it. 00:26:39.733 [2024-12-06 03:34:59.599646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.599662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 
00:26:39.734 [2024-12-06 03:34:59.599801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.599817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.599982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.599998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.600094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.600111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.600216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.600232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.600315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.600330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 
00:26:39.734 [2024-12-06 03:34:59.600495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.600512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.600722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.600738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.600892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.600908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.601066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.601083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.601239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.601256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 
00:26:39.734 [2024-12-06 03:34:59.601398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.601414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.601650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.601667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.601765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.601782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.601924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.601942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.602163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.602179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 
00:26:39.734 [2024-12-06 03:34:59.602339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.602355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.602562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.602578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.602733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.602750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.602970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.602987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.603101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.603117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 
00:26:39.734 [2024-12-06 03:34:59.603214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.603230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.603468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.603484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.603665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.603687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.603846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.603863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.604096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.604114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 
00:26:39.734 [2024-12-06 03:34:59.604377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.604393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.604552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.604569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.604653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.604669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.604820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.604837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.605046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.605063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 
00:26:39.734 [2024-12-06 03:34:59.605206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.605223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.605316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.605333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.605500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.605516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.605723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.605737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.734 qpair failed and we were unable to recover it. 00:26:39.734 [2024-12-06 03:34:59.605914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.734 [2024-12-06 03:34:59.605926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 
00:26:39.735 [2024-12-06 03:34:59.606071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.606084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.606227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.606239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.606441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.606454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.606598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.606611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.606708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.606720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 
00:26:39.735 [2024-12-06 03:34:59.606868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.606880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.607064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.607077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.607227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.607241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.607458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.607470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.607561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.607572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 
00:26:39.735 [2024-12-06 03:34:59.607758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.607771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.607984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.607998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.608137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.608148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.608288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.608300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.608471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.608484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 
00:26:39.735 [2024-12-06 03:34:59.608719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.608732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.608908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.608922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.609096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.609108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.609202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.609213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 00:26:39.735 [2024-12-06 03:34:59.609458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.609470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it. 
00:26:39.735 [2024-12-06 03:34:59.609616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.735 [2024-12-06 03:34:59.609628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.735 qpair failed and we were unable to recover it.
[the same three-message error group repeats continuously from 03:34:59.609764 through 03:34:59.628434: connect() failed with errno = 111 on every attempt, always for tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420, each attempt ending in "qpair failed and we were unable to recover it."]
00:26:39.738 [2024-12-06 03:34:59.628571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.628584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.628655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.628667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.628864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.628877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.629012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.629024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.629159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.629171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 
00:26:39.738 [2024-12-06 03:34:59.629317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.629328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.629528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.629541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.629693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.629706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.629782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.629793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.629860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.629872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 
00:26:39.738 [2024-12-06 03:34:59.630086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.630099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.630181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.630193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.630337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.630351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.630533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.630546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-12-06 03:34:59.630749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-12-06 03:34:59.630761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-12-06 03:34:59.630897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.630910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.631164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.631177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.631330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.631342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.631520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.631533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.631744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.631757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-12-06 03:34:59.631889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.631901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.631994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.632005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.632151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.632163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.632378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.632391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.632591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.632605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-12-06 03:34:59.632775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.632787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.632861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.632872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.633010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.633023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.633157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.633169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.633313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.633325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-12-06 03:34:59.633458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.633471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.633557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.633569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.633664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.633676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.633828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.633841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.634002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.634014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-12-06 03:34:59.634287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.634299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.634513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.634525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.634609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.634621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.634710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.634724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.634874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.634894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-12-06 03:34:59.635122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.635144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.635243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.635258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.635511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.635527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.635751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.635767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.635951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.635967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-12-06 03:34:59.636060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.636075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.636277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.636290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.636428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-12-06 03:34:59.636440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-12-06 03:34:59.636577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.636590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.636815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.636828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-12-06 03:34:59.637055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.637067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.637209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.637222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.637468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.637485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.637633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.637645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.637817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.637829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-12-06 03:34:59.637993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.638006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.638143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.638156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.638222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.638235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.638373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.638386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.638534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.638547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-12-06 03:34:59.638718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.638730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.639002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.639016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.639186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.639199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.639284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.639297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.639432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.639444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-12-06 03:34:59.639665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.639678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.639933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.639945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.640185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.640197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.640330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.640342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.640440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.640451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-12-06 03:34:59.640584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.640597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.640737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.640749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.640881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.640891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.641073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.641084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.641334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.641345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-12-06 03:34:59.641479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.641490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.641568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.641577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.641677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.641687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.641910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.641919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-12-06 03:34:59.642220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-12-06 03:34:59.642236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.743 [2024-12-06 03:34:59.661638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-12-06 03:34:59.661650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-12-06 03:34:59.661795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-12-06 03:34:59.661807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-12-06 03:34:59.662035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-12-06 03:34:59.662047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-12-06 03:34:59.662279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-12-06 03:34:59.662291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-12-06 03:34:59.662475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-12-06 03:34:59.662488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 
00:26:39.743 [2024-12-06 03:34:59.662691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-12-06 03:34:59.662704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-12-06 03:34:59.662852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-12-06 03:34:59.662865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-12-06 03:34:59.663021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-12-06 03:34:59.663033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.663167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.663179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.663393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.663407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-12-06 03:34:59.663629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.663641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.663788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.663801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.664023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.664035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.664223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.664235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.664458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.664471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-12-06 03:34:59.664605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.664617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.664826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.664840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.664992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.665006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.665079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.665090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.665233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.665246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-12-06 03:34:59.665331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.665343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.665614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.665627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.665880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.665893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.666116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.666130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.666330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.666342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-12-06 03:34:59.666511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.666524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.666727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.666740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.666828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.666842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.667070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.667084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.667300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.667313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-12-06 03:34:59.667448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.667462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.667610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.667622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.667787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.667799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.667890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.667902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.668075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.668089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-12-06 03:34:59.668160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.668170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.668403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.668417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.668614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.668626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.668822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.668834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.669002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.669015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-12-06 03:34:59.669240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-12-06 03:34:59.669252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-12-06 03:34:59.669499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.669512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.669648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.669661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.669798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.669811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.670017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.670035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-12-06 03:34:59.670273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.670289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.670464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.670480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.670724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.670740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.670901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.670914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.671099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.671114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-12-06 03:34:59.671199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.671210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.671346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.671359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.671524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.671537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.671752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.671764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.671912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.671924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-12-06 03:34:59.672068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.672081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.672161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.672172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.672384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.672396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.672486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.672497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.672639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.672652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-12-06 03:34:59.672816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.672829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.672934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.672951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.673032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.673043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.673198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.673211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.673293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.673304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-12-06 03:34:59.673545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.673557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.673785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.673798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.673941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.673959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.674095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.674107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.674255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.674268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-12-06 03:34:59.674398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.674410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.674518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.674530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.674695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.674707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.674879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.674893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-12-06 03:34:59.674982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-12-06 03:34:59.674994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-12-06 03:34:59.675220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.745 [2024-12-06 03:34:59.675232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.745 qpair failed and we were unable to recover it.
00:26:39.749 [2024-12-06 03:34:59.694929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.694940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.695148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.695161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.695242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.695253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.695403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.695415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.695570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.695581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 
00:26:39.749 [2024-12-06 03:34:59.695797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.695809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.696064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.696087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.696268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.696286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.696460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.696475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.696683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.696700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 
00:26:39.749 [2024-12-06 03:34:59.696880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.696896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.696991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.697007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.697236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.697249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.697417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.697430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.697513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.697523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 
00:26:39.749 [2024-12-06 03:34:59.697706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.697718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.697852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.697864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.697944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.697960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.698102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.698113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.698259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.698273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 
00:26:39.749 [2024-12-06 03:34:59.698422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.698435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.698567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.698579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.698709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.698720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.698807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.698818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.698968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.698981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 
00:26:39.749 [2024-12-06 03:34:59.699206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.699219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.699298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.699310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.699456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.699468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.699692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.699704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.699845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.699857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 
00:26:39.749 [2024-12-06 03:34:59.699995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.700008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.700174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.700187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.700324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.700336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.700560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.700572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.749 [2024-12-06 03:34:59.700707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.700720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 
00:26:39.749 [2024-12-06 03:34:59.700944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.749 [2024-12-06 03:34:59.700960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.749 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.701112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.701125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.701335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.701347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.701478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.701490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.701727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.701740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 
00:26:39.750 [2024-12-06 03:34:59.701966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.701980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.702138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.702150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.702296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.702308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.702439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.702452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.702602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.702614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 
00:26:39.750 [2024-12-06 03:34:59.702773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.702785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.702942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.702972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.703056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.703071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.703232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.703248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.703407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.703423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 
00:26:39.750 [2024-12-06 03:34:59.703633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.703650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.703797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.703814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.704020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.704038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.704216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.704232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.704370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.704386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 
00:26:39.750 [2024-12-06 03:34:59.704615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.704629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.704787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.704800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.704982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.704996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.705163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.705175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.705418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.705430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 
00:26:39.750 [2024-12-06 03:34:59.705572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.705584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.705730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.705743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.705891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.705904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.706127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.706139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.706238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.706250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 
00:26:39.750 [2024-12-06 03:34:59.706345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.706356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.706506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.706518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.706614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.706627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.706826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.706838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.706938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.706955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 
00:26:39.750 [2024-12-06 03:34:59.707053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.707065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.707134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.707146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.707260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.750 [2024-12-06 03:34:59.707272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.750 qpair failed and we were unable to recover it. 00:26:39.750 [2024-12-06 03:34:59.707478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.751 [2024-12-06 03:34:59.707490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.751 qpair failed and we were unable to recover it. 00:26:39.751 [2024-12-06 03:34:59.707577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.751 [2024-12-06 03:34:59.707589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.751 qpair failed and we were unable to recover it. 
00:26:39.751 [2024-12-06 03:34:59.707796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.751 [2024-12-06 03:34:59.707810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.751 qpair failed and we were unable to recover it. 
00:26:39.754 [2024-12-06 03:34:59.727942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.727959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.728097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.728109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.728331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.728344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.728493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.728505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.728645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.728658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.754 [2024-12-06 03:34:59.728735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.728746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.728960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.728973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.729162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.729175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.729373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.729384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.729470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.729480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.754 [2024-12-06 03:34:59.729548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.729559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.729786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.729799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.729954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.729968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.730235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.730247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.730330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.730342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.754 [2024-12-06 03:34:59.730481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.730494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.730639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.730654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.730734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.730746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.730811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.730822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.730974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.730987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.754 [2024-12-06 03:34:59.731122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.731134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.731298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.731311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.731563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.731576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.731739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.731752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.731905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.731917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.754 [2024-12-06 03:34:59.732190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.732202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.732334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.732347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.732567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.732580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.732645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.732656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-12-06 03:34:59.732801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-12-06 03:34:59.732814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-12-06 03:34:59.732987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.733000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.733201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.733214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.733431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.733444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.733512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.733523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.733744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.733758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-12-06 03:34:59.733972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.733985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.734195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.734208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.734344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.734357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.734498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.734510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.734735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.734747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-12-06 03:34:59.734934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.734951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.735039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.735051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.735252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.735265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.735416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.735429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.735691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.735703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-12-06 03:34:59.735924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.735936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.736186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.736198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.736351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.736363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.736600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.736612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.736675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.736686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-12-06 03:34:59.736752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.736763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.736934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.736945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.737029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.737040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.737173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.737186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.737284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.737296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-12-06 03:34:59.737493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.737506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.737582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.737596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.737766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.737778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.738004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.738017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.738237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.738250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-12-06 03:34:59.738473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.738485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.738702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.738715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.738868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.738880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.739096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.739109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.739274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.739286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-12-06 03:34:59.739423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.739435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.739566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.739578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-12-06 03:34:59.739775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-12-06 03:34:59.739789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.739991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.740005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.740228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.740240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 
00:26:39.756 [2024-12-06 03:34:59.740383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.740394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.740621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.740634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.740856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.740869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.741033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.741046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.741122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.741133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 
00:26:39.756 [2024-12-06 03:34:59.741269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.741282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.741376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.741388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.741615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.741627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.741773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.741785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-12-06 03:34:59.742007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-12-06 03:34:59.742020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 
00:26:39.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2771787 Killed "${NVMF_APP[@]}" "$@"
00:26:39.758 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:39.758 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:39.758 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:39.759 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:39.759 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:39.759 [2024-12-06 03:34:59.760581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.760601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.760713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.760736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.760886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.760903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.761140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.761156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.761333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.761350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 
00:26:39.759 [2024-12-06 03:34:59.761452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.761467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.761613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.761631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.761784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.761802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.762017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.762033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.762266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.762283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 
00:26:39.759 [2024-12-06 03:34:59.762359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.762374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.762464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.762480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.762727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.762741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.762811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.762825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.762920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.762933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 
00:26:39.759 [2024-12-06 03:34:59.763149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-12-06 03:34:59.763162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-12-06 03:34:59.763317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.763328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.763485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.763497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.763584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.763595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.763760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.763772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 [2024-12-06 03:34:59.763956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.763969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.764123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.764138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.764273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.764286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2772504 00:26:39.760 [2024-12-06 03:34:59.764532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.764546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.764635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.764648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2772504 00:26:39.760 [2024-12-06 03:34:59.764801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.764813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.764889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:39.760 [2024-12-06 03:34:59.764899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.764998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.765012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.765092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.765105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2772504 ']' 00:26:39.760 [2024-12-06 03:34:59.765199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.765212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.765409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.760 [2024-12-06 03:34:59.765421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.765623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.765636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.760 [2024-12-06 03:34:59.765717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.765729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 [2024-12-06 03:34:59.765870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.765884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.760 [2024-12-06 03:34:59.766030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.766042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.766124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.766136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.760 [2024-12-06 03:34:59.766298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.766312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 [2024-12-06 03:34:59.766451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.766463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 03:34:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.760 [2024-12-06 03:34:59.766622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.766634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.766767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.766779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.766990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.767002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.767090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.767102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 [2024-12-06 03:34:59.767180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.767192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.767287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.767299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.767382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.767393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.767477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.767490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.767588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.767601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 [2024-12-06 03:34:59.767664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-12-06 03:34:59.767675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-12-06 03:34:59.767809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.767822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.767994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.768008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.768188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.768200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.768323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.768336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-12-06 03:34:59.768416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.768427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.768583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.768601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.768814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.768826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.769000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.769013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.769221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.769235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-12-06 03:34:59.769388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.769401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.769494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.769507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.769598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.769612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.769825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.769837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.769900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.769912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-12-06 03:34:59.770091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.770116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.770294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.770312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.770492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.770509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.770684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.770701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.770800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.770819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-12-06 03:34:59.770974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.770991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.771142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.771159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.771256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.771273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.771491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.771507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.771717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.771731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-12-06 03:34:59.771819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.771829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.772053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.772066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.772205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.772218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.772377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.772392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.772463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.772474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-12-06 03:34:59.772680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.772694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.772834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.772848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.773090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.773103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.773180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.773190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-12-06 03:34:59.773354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-12-06 03:34:59.773367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-12-06 03:34:59.773522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.761 [2024-12-06 03:34:59.773534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.761 qpair failed and we were unable to recover it.
00:26:39.761 [2024-12-06 03:34:59.773748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.761 [2024-12-06 03:34:59.773761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.761 qpair failed and we were unable to recover it.
00:26:39.761 [2024-12-06 03:34:59.773978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.761 [2024-12-06 03:34:59.773992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.761 qpair failed and we were unable to recover it.
00:26:39.761 [2024-12-06 03:34:59.774160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.761 [2024-12-06 03:34:59.774172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.761 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.774322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.774335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.774469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.774482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.774679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.774693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.774825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.774836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.775038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.775052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.775252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.775264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.775440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.775452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.775619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.775632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.775810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.775823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.776070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.776083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.776162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.776173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.776241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.776252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.776409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.776421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.776593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.776606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.776674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.776685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.776881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.776893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.777144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.777165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.777382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.777399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.777547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.777564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.777791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.777807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.777971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.777988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.778222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.778239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.778328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.778344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.778551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.778567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.778723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.778740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.778968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.778986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.779166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.779183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.779295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.779312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.779460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.762 [2024-12-06 03:34:59.779477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.762 qpair failed and we were unable to recover it.
00:26:39.762 [2024-12-06 03:34:59.779637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.779653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.779813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.779829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.779923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.779940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.780057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.780074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.780244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.780260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.780439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.780454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.780611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.780624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.780862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.780874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.781060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.781074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.781162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.781174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.781271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.781283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.781439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.781454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.781611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.781624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.781858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.781871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.782089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.782108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.782349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.782366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.782461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.782477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.782717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.782733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.782904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.782921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.783148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.783166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.783340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.783357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.783574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.783590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.783695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.783713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.783860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.783878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.784025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.784041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.784140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.784156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.784327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.784345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.784500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.784517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.784746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.784763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.784856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.784873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.785051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.785068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.785280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.785297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.785526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.785542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.785805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.785822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.785931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.785951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.786128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.786143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.763 [2024-12-06 03:34:59.786226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.763 [2024-12-06 03:34:59.786237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.763 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.786439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.786451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.786603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.786615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.786694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.786706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.786928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.786940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.787030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.787048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.787233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.787250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.787364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.787381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.787546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.787563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.787792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.787810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.788049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.788066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.788221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.788237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.788406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.788422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.788583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.788600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.788685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.788698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.788775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.788786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.788939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.788956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.789111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.789123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.789214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.789226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.789378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.789390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.789465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.789476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.789635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.789647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.789796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.789808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.789893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.789904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.790027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.790041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.790270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-12-06 03:34:59.790282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-12-06 03:34:59.790431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.790444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-12-06 03:34:59.790644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.790658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-12-06 03:34:59.790743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.790754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-12-06 03:34:59.790830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.790843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-12-06 03:34:59.791003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.791016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 
00:26:39.764 [2024-12-06 03:34:59.791186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.791200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-12-06 03:34:59.791265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.791276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-12-06 03:34:59.791500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.791512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-12-06 03:34:59.791611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.791622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-12-06 03:34:59.791765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.791778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 
00:26:39.764 [2024-12-06 03:34:59.791864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-12-06 03:34:59.791875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-12-06 03:34:59.792008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.792023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.792227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.792241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.792381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.792393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.792457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.792468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 
00:26:39.765 [2024-12-06 03:34:59.792625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.792639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.792802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.792815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.792894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.792906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.793002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.793014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.793229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.793243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 
00:26:39.765 [2024-12-06 03:34:59.793374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.793386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.793517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.793530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.793660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.793673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.793823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.793836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.793912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.793923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 
00:26:39.765 [2024-12-06 03:34:59.793991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.794071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.794159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.794244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.794342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 
00:26:39.765 [2024-12-06 03:34:59.794555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.794631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.794733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.794843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.794946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.794964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 
00:26:39.765 [2024-12-06 03:34:59.795108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.795121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.795202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.795214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.795422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.795435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.795510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.795522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.795666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.795680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 
00:26:39.765 [2024-12-06 03:34:59.795820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.795833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.795991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.796004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.796144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.796157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.796232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.796242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.796335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.796346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 
00:26:39.765 [2024-12-06 03:34:59.796482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.796495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.796577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.765 [2024-12-06 03:34:59.796588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.765 qpair failed and we were unable to recover it. 00:26:39.765 [2024-12-06 03:34:59.796659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.796669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.796826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.796839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.796939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.796955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 
00:26:39.766 [2024-12-06 03:34:59.797035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.797048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.797123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.797135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.797211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.797223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.797368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.797381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.797460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.797471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 
00:26:39.766 [2024-12-06 03:34:59.797550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.797563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.797718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.797731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.797811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.797822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.797909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.797920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.797990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.798004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 
00:26:39.766 [2024-12-06 03:34:59.798100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.798111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.798194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.798206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.798283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.798294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.798378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.798389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.798557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.798570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 
00:26:39.766 [2024-12-06 03:34:59.798729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.798741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.798875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.798888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.798962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.798973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.799104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.799117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.799259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.799272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 
00:26:39.766 [2024-12-06 03:34:59.799360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.799372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.799457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.799470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.799541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.799552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.799618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.799630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.799760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.799773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 
00:26:39.766 [2024-12-06 03:34:59.799849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.799861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.800009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.800021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.800108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.800120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.800205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.800217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.800291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.800303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 
00:26:39.766 [2024-12-06 03:34:59.800366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.800377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.800581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.800594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.800726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.800738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.766 qpair failed and we were unable to recover it. 00:26:39.766 [2024-12-06 03:34:59.800879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.766 [2024-12-06 03:34:59.800893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.767 qpair failed and we were unable to recover it. 00:26:39.767 [2024-12-06 03:34:59.801032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.767 [2024-12-06 03:34:59.801046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:39.767 qpair failed and we were unable to recover it. 
00:26:39.767 [2024-12-06 03:34:59.801192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.767 [2024-12-06 03:34:59.801205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:39.767 qpair failed and we were unable to recover it.
00:26:40.072 [2024-12-06 03:34:59.810167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.072 [2024-12-06 03:34:59.810185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:40.072 qpair failed and we were unable to recover it.
00:26:40.072 [2024-12-06 03:34:59.812456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.072 [2024-12-06 03:34:59.812478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:40.072 qpair failed and we were unable to recover it.
00:26:40.072 [2024-12-06 03:34:59.812988] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization...
00:26:40.073 [2024-12-06 03:34:59.813034] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:40.075 [2024-12-06 03:34:59.826675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.075 [2024-12-06 03:34:59.826687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.075 qpair failed and we were unable to recover it. 00:26:40.075 [2024-12-06 03:34:59.826777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.075 [2024-12-06 03:34:59.826789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.075 qpair failed and we were unable to recover it. 00:26:40.075 [2024-12-06 03:34:59.826864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.075 [2024-12-06 03:34:59.826875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.075 qpair failed and we were unable to recover it. 00:26:40.075 [2024-12-06 03:34:59.827024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.075 [2024-12-06 03:34:59.827037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.075 qpair failed and we were unable to recover it. 00:26:40.075 [2024-12-06 03:34:59.827140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.075 [2024-12-06 03:34:59.827154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.075 qpair failed and we were unable to recover it. 
00:26:40.076 [2024-12-06 03:34:59.827219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.827231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.827308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.827319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.827452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.827465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.827548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.827561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.827701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.827712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 
00:26:40.076 [2024-12-06 03:34:59.827840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.827851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.827922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.827934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.828124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.828137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.828230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.828242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.828376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.828389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 
00:26:40.076 [2024-12-06 03:34:59.828546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.828559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.828642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.828654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.828784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.828796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.828879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.828891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.828972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.828984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 
00:26:40.076 [2024-12-06 03:34:59.829185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.829196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.829273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.829285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.829429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.829441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.829569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.829581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.829676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.829688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 
00:26:40.076 [2024-12-06 03:34:59.829874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.829887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.829975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.829987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.830135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.830148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.830293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.830305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.830373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.830386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 
00:26:40.076 [2024-12-06 03:34:59.830473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.830486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.830580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.830592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.830750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.830761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.830913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.830925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.830995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.831007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 
00:26:40.076 [2024-12-06 03:34:59.831097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.076 [2024-12-06 03:34:59.831109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.076 qpair failed and we were unable to recover it. 00:26:40.076 [2024-12-06 03:34:59.831261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.831272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.831347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.831359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.831437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.831449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.831515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.831527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 
00:26:40.077 [2024-12-06 03:34:59.831714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.831725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.831891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.831903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.831992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.832005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.832087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.832099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.832179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.832192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 
00:26:40.077 [2024-12-06 03:34:59.832326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.832338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.832400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.832411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.832488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.832500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.832634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.832646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.832722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.832734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 
00:26:40.077 [2024-12-06 03:34:59.832878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.832890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.833032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.833044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.833139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.833151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.833235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.833248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.833323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.833335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 
00:26:40.077 [2024-12-06 03:34:59.833405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.833417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.833630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.833642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.833732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.833745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.833882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.833894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.834044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.834056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 
00:26:40.077 [2024-12-06 03:34:59.834139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.834151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.834234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.834246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.834391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.834403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.834508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.834520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.834596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.834609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 
00:26:40.077 [2024-12-06 03:34:59.834693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.834705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.834783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.834795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.834938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.834953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.835090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.835102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.835170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.835182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 
00:26:40.077 [2024-12-06 03:34:59.835379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.835391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.835493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.835505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.835589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.835601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.077 [2024-12-06 03:34:59.835666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.077 [2024-12-06 03:34:59.835678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.077 qpair failed and we were unable to recover it. 00:26:40.078 [2024-12-06 03:34:59.835743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.078 [2024-12-06 03:34:59.835755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.078 qpair failed and we were unable to recover it. 
00:26:40.078 [2024-12-06 03:34:59.835839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.078 [2024-12-06 03:34:59.835850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.078 qpair failed and we were unable to recover it.
[... the three messages above repeat identically for every subsequent connection attempt through 03:34:59.849785 ...]
00:26:40.081 [2024-12-06 03:34:59.849991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.850089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.850178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.850418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.850515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 
00:26:40.081 [2024-12-06 03:34:59.850606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.850683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.850776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.850865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.850959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.850971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 
00:26:40.081 [2024-12-06 03:34:59.851058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.851072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.851137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.851148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.851229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.851241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.851313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.851326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.851396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.851409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 
00:26:40.081 [2024-12-06 03:34:59.851544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.851556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.851756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.851768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.851868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.851879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.851976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.851989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.852082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.852093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 
00:26:40.081 [2024-12-06 03:34:59.852232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.852244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.852327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.852339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.852417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.852429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.852509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.852521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.852613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.852625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 
00:26:40.081 [2024-12-06 03:34:59.852795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.852807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.852955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.852967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.853171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.853184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.853270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.853283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.853436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.853448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 
00:26:40.081 [2024-12-06 03:34:59.853540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.853552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.081 qpair failed and we were unable to recover it. 00:26:40.081 [2024-12-06 03:34:59.853646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.081 [2024-12-06 03:34:59.853659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.853844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.853856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.854007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.854020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.854087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.854099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 
00:26:40.082 [2024-12-06 03:34:59.854181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.854193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.854336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.854348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.854430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.854442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.854532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.854544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.854687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.854700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 
00:26:40.082 [2024-12-06 03:34:59.854798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.854810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.854944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.854971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.855133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.855149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.855241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.855257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.855348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.855363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 
00:26:40.082 [2024-12-06 03:34:59.855500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.855517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.855607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.855622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.855763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.855777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.855863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.855877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.855960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.855972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 
00:26:40.082 [2024-12-06 03:34:59.856042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.856053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.856258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.856269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.856470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.856482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.856575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.856587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.856753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.856767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 
00:26:40.082 [2024-12-06 03:34:59.856912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.856924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.857097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.857110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.857201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.857212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.857360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.857372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.857459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.857472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 
00:26:40.082 [2024-12-06 03:34:59.857555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.857566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.857704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.857716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.857803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.857816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.857879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.857891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.857967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.857979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 
00:26:40.082 [2024-12-06 03:34:59.858070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.858083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.858175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.858186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.858331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.858342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.858421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.858433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 00:26:40.082 [2024-12-06 03:34:59.858528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.082 [2024-12-06 03:34:59.858540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.082 qpair failed and we were unable to recover it. 
00:26:40.083 [2024-12-06 03:34:59.858639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.083 [2024-12-06 03:34:59.858651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.083 qpair failed and we were unable to recover it. 00:26:40.083 [2024-12-06 03:34:59.858726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.083 [2024-12-06 03:34:59.858738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.083 qpair failed and we were unable to recover it. 00:26:40.083 [2024-12-06 03:34:59.858898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.083 [2024-12-06 03:34:59.858910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.083 qpair failed and we were unable to recover it. 00:26:40.083 [2024-12-06 03:34:59.859049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.083 [2024-12-06 03:34:59.859061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.083 qpair failed and we were unable to recover it. 00:26:40.083 [2024-12-06 03:34:59.859127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.083 [2024-12-06 03:34:59.859139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.083 qpair failed and we were unable to recover it. 
00:26:40.083 [2024-12-06 03:34:59.859205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.083 [2024-12-06 03:34:59.859218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.083 qpair failed and we were unable to recover it. 
[... identical error triplets repeated through 2024-12-06 03:34:59.872310: posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it."; duplicates omitted ...]
00:26:40.086 [2024-12-06 03:34:59.872449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.872460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.872596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.872608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.872683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.872694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.872839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.872850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.873020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.873033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 
00:26:40.086 [2024-12-06 03:34:59.873117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.873128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.873293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.873305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.873387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.873398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.873463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.873474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.873542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.873553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 
00:26:40.086 [2024-12-06 03:34:59.873639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.873651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.876100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.876119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.876334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.876345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.876434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.876445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.876598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.876611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 
00:26:40.086 [2024-12-06 03:34:59.876783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.876796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.876888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.876900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.876983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.876994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.877145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.877158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.877309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.877322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 
00:26:40.086 [2024-12-06 03:34:59.877420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.877431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.877580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.877593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.877674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.877686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.877749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.877771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 00:26:40.086 [2024-12-06 03:34:59.877848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.086 [2024-12-06 03:34:59.877860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.086 qpair failed and we were unable to recover it. 
00:26:40.087 [2024-12-06 03:34:59.877965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.877977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.878069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.878082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.878225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.878239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.878304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.878316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.878393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.878405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 
00:26:40.087 [2024-12-06 03:34:59.878481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.878494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.878638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.878651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.878727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.878739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.878878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.878890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.878986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.878998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 
00:26:40.087 [2024-12-06 03:34:59.879129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.879209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.879298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.879382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.879476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 
00:26:40.087 [2024-12-06 03:34:59.879557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.879659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.879738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.879822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.879901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.879914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 
00:26:40.087 [2024-12-06 03:34:59.879988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.880000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.880071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.880082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.880160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.880172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.880239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.880250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.880340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.880352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 
00:26:40.087 [2024-12-06 03:34:59.880416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.880427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.880562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.880575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.880720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.880731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.880896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.880907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.881041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.881055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 
00:26:40.087 [2024-12-06 03:34:59.881140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.881151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.881303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.881314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.881393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.881404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.881494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.881506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.881642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.881653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 
00:26:40.087 [2024-12-06 03:34:59.881857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.881869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.881993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.882006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.087 [2024-12-06 03:34:59.882072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.087 [2024-12-06 03:34:59.882084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.087 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.882159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.882170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.882243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.882255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 
00:26:40.088 [2024-12-06 03:34:59.882416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.882429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.882520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.882531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.882597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.882608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.882701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.882712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.882778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.882789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 
00:26:40.088 [2024-12-06 03:34:59.882925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.882937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.883108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.883136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.883293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.883310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.883466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.883482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.883569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.883584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 
00:26:40.088 [2024-12-06 03:34:59.883674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.883690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.883896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.883913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.884095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.884120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.884215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.884232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.884322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.884338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 
00:26:40.088 [2024-12-06 03:34:59.884426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.884441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.884543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.884559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.884720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.884737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.884834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.884851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.884938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.884958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 
00:26:40.088 [2024-12-06 03:34:59.885135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.885151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.885236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.885255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.885400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.885418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.885508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.885523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.885691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.885707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 
00:26:40.088 [2024-12-06 03:34:59.885787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.885799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.885887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.885899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.886067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.886081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.886157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.886169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.886261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.886272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 
00:26:40.088 [2024-12-06 03:34:59.886356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.886368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.886503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.886516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.886592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.886604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.886697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.886710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.886814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.886826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 
00:26:40.088 [2024-12-06 03:34:59.886909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.088 [2024-12-06 03:34:59.886921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.088 qpair failed and we were unable to recover it. 00:26:40.088 [2024-12-06 03:34:59.886994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.887006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.887224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.887237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.887300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.887312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.887475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.887487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 
00:26:40.089 [2024-12-06 03:34:59.887557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.887568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.887660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.887671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.887764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.887777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.887856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.887868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.887963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.887976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 
00:26:40.089 [2024-12-06 03:34:59.888119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.888132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.888281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.888294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.888378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.888390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.888540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.888553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.888636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.888647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 
00:26:40.089 [2024-12-06 03:34:59.888736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.888748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.888830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.888842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.888916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.888927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.889010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.889026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.889105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.889117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 
00:26:40.089 [2024-12-06 03:34:59.889210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.889221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.889285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.889298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.889376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.889388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.889524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.889537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.889629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.889642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 
00:26:40.089 [2024-12-06 03:34:59.889726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.889738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.889828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.889841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.889988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.890099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.890197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 
00:26:40.089 [2024-12-06 03:34:59.890293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.890385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.890556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.890649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.890740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 
00:26:40.089 [2024-12-06 03:34:59.890891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.890979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.890992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.891127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.891141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.089 qpair failed and we were unable to recover it. 00:26:40.089 [2024-12-06 03:34:59.891229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.089 [2024-12-06 03:34:59.891242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.891316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.891328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 
00:26:40.090 [2024-12-06 03:34:59.891421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.891433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.891507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.891519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.891594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.891606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.891675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.891687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.891764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.891777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 
00:26:40.090 [2024-12-06 03:34:59.891856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.891868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.892003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.892016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.892093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.892105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.892256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.892270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.892430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.892443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 
00:26:40.090 [2024-12-06 03:34:59.892590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.892602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.892748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.892762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.892853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.892865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.892940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.892956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.893069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.893066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.090 [2024-12-06 03:34:59.893082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 
00:26:40.090 [2024-12-06 03:34:59.893153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.893165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.893325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.893337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.893415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.893427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.893521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.893534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.893693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.893706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 
00:26:40.090 [2024-12-06 03:34:59.893787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.893799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.893867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.893879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.893959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.893971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.894042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.894055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.894137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.894151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 
00:26:40.090 [2024-12-06 03:34:59.894222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.894234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.894328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.894340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.894478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.894491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.894580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.894592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 00:26:40.090 [2024-12-06 03:34:59.894690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.894702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it. 
00:26:40.090 [2024-12-06 03:34:59.894784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.090 [2024-12-06 03:34:59.894797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.090 qpair failed and we were unable to recover it.
00:26:40.090 [... the preceding connect()/qpair-failure triple repeats continuously from 03:34:59.894784 through 03:34:59.907710, almost entirely against tqpair=0x7f4cd8000b90; a handful of attempts against tqpair=0x115cbe0 appear starting at 03:34:59.907001, all with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:26:40.093 [2024-12-06 03:34:59.907696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.093 [2024-12-06 03:34:59.907710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.093 qpair failed and we were unable to recover it.
00:26:40.093 [2024-12-06 03:34:59.907919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.093 [2024-12-06 03:34:59.907932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.908048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.908061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.908205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.908218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.908300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.908313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.908450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.908462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 
00:26:40.094 [2024-12-06 03:34:59.908597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.908609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.908747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.908764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.908829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.908841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.908933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.908946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.909173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.909187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 
00:26:40.094 [2024-12-06 03:34:59.909271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.909293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.909350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.909362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.909432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.909445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.909539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.909552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.909616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.909629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 
00:26:40.094 [2024-12-06 03:34:59.909710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.909724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.909806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.909826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.909906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.909918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.910067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.910082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.910152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.910165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 
00:26:40.094 [2024-12-06 03:34:59.910243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.910256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.910390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.910403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.910479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.910491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.910574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.910587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.910676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.910691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 
00:26:40.094 [2024-12-06 03:34:59.910760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.910773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.910841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.910853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.911008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.911021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.911099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.911111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.911191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.911204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 
00:26:40.094 [2024-12-06 03:34:59.911274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.911286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.911371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.911383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.911592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.911605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.911787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.911826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.911960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.911997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 
00:26:40.094 [2024-12-06 03:34:59.912096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.912116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.912286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.912303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.094 qpair failed and we were unable to recover it. 00:26:40.094 [2024-12-06 03:34:59.912401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.094 [2024-12-06 03:34:59.912418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.912585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.912601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.912675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.912688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 
00:26:40.095 [2024-12-06 03:34:59.912841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.912853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.913014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.913026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.913102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.913115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.913183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.913195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.913400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.913412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 
00:26:40.095 [2024-12-06 03:34:59.913487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.913498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.913581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.913593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.913744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.913756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.913846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.913858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.913990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.914002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 
00:26:40.095 [2024-12-06 03:34:59.914073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.914085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.914156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.914169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.914256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.914269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.914413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.914426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.914578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.914591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 
00:26:40.095 [2024-12-06 03:34:59.914677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.914689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.914821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.914833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.914924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.914937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.915036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.915049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.915249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.915261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 
00:26:40.095 [2024-12-06 03:34:59.915337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.915350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.915412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.915424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.915501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.915514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.915609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.915621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.915752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.915764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 
00:26:40.095 [2024-12-06 03:34:59.915841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.915853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.915927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.915940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.916081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.916094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.916189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.916202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.916271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.916285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 
00:26:40.095 [2024-12-06 03:34:59.916431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.916445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.095 qpair failed and we were unable to recover it. 00:26:40.095 [2024-12-06 03:34:59.916590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.095 [2024-12-06 03:34:59.916603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.096 qpair failed and we were unable to recover it. 00:26:40.096 [2024-12-06 03:34:59.916684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.096 [2024-12-06 03:34:59.916697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.096 qpair failed and we were unable to recover it. 00:26:40.096 [2024-12-06 03:34:59.916867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.096 [2024-12-06 03:34:59.916882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.096 qpair failed and we were unable to recover it. 00:26:40.096 [2024-12-06 03:34:59.916954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.096 [2024-12-06 03:34:59.916966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.096 qpair failed and we were unable to recover it. 
00:26:40.096 [2024-12-06 03:34:59.917043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.096 [2024-12-06 03:34:59.917055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.096 qpair failed and we were unable to recover it.
00:26:40.099 [2024-12-06 03:34:59.930080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.930093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.930184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.930197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.930339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.930352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.930490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.930505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.930585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.930598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 
00:26:40.099 [2024-12-06 03:34:59.930836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.930849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.930931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.930943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.931037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.931050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.931118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.931130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.931337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.931350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 
00:26:40.099 [2024-12-06 03:34:59.931489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.931501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.931655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.931667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.931769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.931781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.931852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.931864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.931942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.931959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 
00:26:40.099 [2024-12-06 03:34:59.932112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.932124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.932276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.932289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.932453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.932465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.932561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.932578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.932644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.932657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 
00:26:40.099 [2024-12-06 03:34:59.932741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.932752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.932933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.932945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.933046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.933057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.933188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.933200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.933290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.933302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 
00:26:40.099 [2024-12-06 03:34:59.933449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.933462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.933602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.933615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.933760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.933773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.933963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.933976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.099 qpair failed and we were unable to recover it. 00:26:40.099 [2024-12-06 03:34:59.934120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.099 [2024-12-06 03:34:59.934133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 
00:26:40.100 [2024-12-06 03:34:59.934290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.934302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.934439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.934451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.934602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.934622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.934695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.934707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.934789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.934801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 
00:26:40.100 [2024-12-06 03:34:59.935007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.935020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.935091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.935104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.935183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.935195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.935372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.935385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.935521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.935534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 
00:26:40.100 [2024-12-06 03:34:59.935675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.935687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.935904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.935917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.935998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.936011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.936156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.936172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.936234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.936246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 
00:26:40.100 [2024-12-06 03:34:59.936326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.936339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.936476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.936488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.936667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.936680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.936821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.936834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.936899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.936911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 
00:26:40.100 [2024-12-06 03:34:59.937053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.937066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.937143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.937166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.937305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.937318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.937523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.937536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.937613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.937625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 
00:26:40.100 [2024-12-06 03:34:59.937707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.937719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.937796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.937809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.937892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.937905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.938041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.938055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.938193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.938206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 
00:26:40.100 [2024-12-06 03:34:59.938341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.938353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.938501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.938513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.938650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.938664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.938730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.938742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.938839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.938851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 
00:26:40.100 [2024-12-06 03:34:59.939012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.939032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.939091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.939102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.100 qpair failed and we were unable to recover it. 00:26:40.100 [2024-12-06 03:34:59.939199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.100 [2024-12-06 03:34:59.939211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.101 qpair failed and we were unable to recover it. 00:26:40.101 [2024-12-06 03:34:59.939348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.101 [2024-12-06 03:34:59.939361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.101 qpair failed and we were unable to recover it. 00:26:40.101 [2024-12-06 03:34:59.939421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.101 [2024-12-06 03:34:59.939432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.101 qpair failed and we were unable to recover it. 
00:26:40.101 [2024-12-06 03:34:59.939614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.939638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.940023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:40.101 [2024-12-06 03:34:59.940048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:40.101 [2024-12-06 03:34:59.940061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:40.101 [2024-12-06 03:34:59.940069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:40.101 [2024-12-06 03:34:59.940076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:40.101 [2024-12-06 03:34:59.941170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.941183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.941320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.941332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.941586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.941599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.941675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.941688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.941717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:26:40.101 [2024-12-06 03:34:59.941904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.941830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:26:40.101 [2024-12-06 03:34:59.941917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.941937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:40.101 [2024-12-06 03:34:59.941938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:26:40.101 [2024-12-06 03:34:59.942079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.942095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.942252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.942265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.942348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.942361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.942433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.942445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.942608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.942622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.942692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.942704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.942774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.942786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.942931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.942945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.943120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.943133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.943198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.943211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.943318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.943331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.943419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.943432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.943527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.943540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.101 qpair failed and we were unable to recover it.
00:26:40.101 [2024-12-06 03:34:59.943820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.101 [2024-12-06 03:34:59.943833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.943970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.943983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.944156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.944170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.944346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.944359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.944515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.944528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.944617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.944632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.944832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.944846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.944931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.944943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.945148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.945162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.945244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.945255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.945331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.945343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.945520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.945533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.945685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.945699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.945871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.945883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.945968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.945981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.946135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.946149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.946228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.946240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.946328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.946340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.946419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.946435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.946507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.946519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.946677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.946692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.946776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.946788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.946859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.946871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.947023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.947037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.947211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.947226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.947328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.947339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.947436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.947449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.947585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.947598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.947688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.947700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.947782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.947795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.947957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.947972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.948042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.948055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.948133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.948145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.948289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.948301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.948458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.948472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.948622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.948634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.948719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.948730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.948800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.948811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.102 qpair failed and we were unable to recover it.
00:26:40.102 [2024-12-06 03:34:59.948887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.102 [2024-12-06 03:34:59.948899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.949031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.949045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.949206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.949219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.949375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.949389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.949517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.949531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.949597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.949608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.949692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.949703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.949844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.949858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.949940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.949958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.950091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.950105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.950257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.950270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.950354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.950366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.950456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.950468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.950618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.950631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.950725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.950736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.950830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.950842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.950924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.950936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.951015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.951028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.951095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.951108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.951194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.951206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.951285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.951301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.951446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.951460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.951599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.951613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.951803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.951816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.951907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.951918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.952002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.952016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.952175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.952189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.952294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.952306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.952395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.952407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.952471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.952483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.952549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.952562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.952650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.952661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.952862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.952876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.953024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.953038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.953194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.953206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.953290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.953301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.953440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.953454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.953615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.953628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.103 qpair failed and we were unable to recover it.
00:26:40.103 [2024-12-06 03:34:59.953698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.103 [2024-12-06 03:34:59.953710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.104 qpair failed and we were unable to recover it.
00:26:40.104 [2024-12-06 03:34:59.953856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.953871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.953963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.953977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.954054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.954135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.954231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 
00:26:40.104 [2024-12-06 03:34:59.954334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.954417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.954512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.954671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.954767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 
00:26:40.104 [2024-12-06 03:34:59.954854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.954951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.954963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.955038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.955049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.955117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.955129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.955199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.955210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 
00:26:40.104 [2024-12-06 03:34:59.955286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.955299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.955438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.955450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.955535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.955547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.955639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.955651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.955812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.955826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 
00:26:40.104 [2024-12-06 03:34:59.955902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.955914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.955988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.956150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.956270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.956368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 
00:26:40.104 [2024-12-06 03:34:59.956450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.956538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.956628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.956720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.956891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 
00:26:40.104 [2024-12-06 03:34:59.956977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.956990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.957064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.957078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.957150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.104 [2024-12-06 03:34:59.957168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.104 qpair failed and we were unable to recover it. 00:26:40.104 [2024-12-06 03:34:59.957303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.957315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.957387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.957399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 
00:26:40.105 [2024-12-06 03:34:59.957468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.957481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.957551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.957563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.957702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.957716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.957794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.957806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.957940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.957956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 
00:26:40.105 [2024-12-06 03:34:59.958025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.958107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.958192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.958284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.958359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 
00:26:40.105 [2024-12-06 03:34:59.958448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.958537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.958617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.958771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.958872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 
00:26:40.105 [2024-12-06 03:34:59.958957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.958969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.959038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.959114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.959256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.959402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 
00:26:40.105 [2024-12-06 03:34:59.959487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.959567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.959658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.959807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.959888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 
00:26:40.105 [2024-12-06 03:34:59.959976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.959990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.960064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.960079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.960159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.960172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.960238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.960251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.960346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.960360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 
00:26:40.105 [2024-12-06 03:34:59.960498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.960511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.960581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.960593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.960728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.960742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.960821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.960832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 00:26:40.105 [2024-12-06 03:34:59.960916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.105 [2024-12-06 03:34:59.960929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.105 qpair failed and we were unable to recover it. 
00:26:40.106 [2024-12-06 03:34:59.961002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.961016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.961084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.961097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.961161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.961173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.961245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.961258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.961330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.961344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 
00:26:40.106 [2024-12-06 03:34:59.961423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.961435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.961593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.961605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.961742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.961756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.961923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.961936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.962075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.962110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 
00:26:40.106 [2024-12-06 03:34:59.962227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.962244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.962390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.962406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.962609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.962624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.962772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.962789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 00:26:40.106 [2024-12-06 03:34:59.963029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.963045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it. 
00:26:40.106 [2024-12-06 03:34:59.963149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.106 [2024-12-06 03:34:59.963164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.106 qpair failed and we were unable to recover it.
00:26:40.106 [... the preceding triplet — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats ~110 more times between 03:34:59.963252 and 03:34:59.978730, first for tqpair=0x7f4ce0000b90 and then for tqpair=0x7f4cd8000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:40.109 [2024-12-06 03:34:59.978927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.978939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.979177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.979190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.979363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.979375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.979515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.979528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.979673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.979685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 
00:26:40.109 [2024-12-06 03:34:59.979836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.979848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.980002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.980015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.980149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.980161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.980355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.980381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.980547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.980564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 
00:26:40.109 [2024-12-06 03:34:59.980802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.980818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.980897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.980914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.981018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.981035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.981244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.981260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 00:26:40.109 [2024-12-06 03:34:59.981413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.109 [2024-12-06 03:34:59.981430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.109 qpair failed and we were unable to recover it. 
00:26:40.109 [2024-12-06 03:34:59.981523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.981540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.981730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.981745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.981842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.981856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.981940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.981958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.982053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.982065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 
00:26:40.110 [2024-12-06 03:34:59.982264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.982276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.982418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.982432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.982573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.982586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.982688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.982700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.982780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.982793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 
00:26:40.110 [2024-12-06 03:34:59.982926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.982939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.983052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.983066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.983157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.983170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.983260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.983272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.983349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.983362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 
00:26:40.110 [2024-12-06 03:34:59.983448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.983460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.983609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.983622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.983690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.983702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.983796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.983810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.983881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.983894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 
00:26:40.110 [2024-12-06 03:34:59.984032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.984045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.984266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.984279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.984361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.984373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.984635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.984648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.984850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.984863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 
00:26:40.110 [2024-12-06 03:34:59.984956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.984969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.985105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.985117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.985184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.985196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.985291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.985304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.985398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.985410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 
00:26:40.110 [2024-12-06 03:34:59.985544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.985557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.985623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.985636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.985837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.985851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.985945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.985975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.986079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.986095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 
00:26:40.110 [2024-12-06 03:34:59.986200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.986215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.986312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.986328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.986477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.110 [2024-12-06 03:34:59.986494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.110 qpair failed and we were unable to recover it. 00:26:40.110 [2024-12-06 03:34:59.986574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.986590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.986709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.986726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 
00:26:40.111 [2024-12-06 03:34:59.986803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.986818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.986905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.986923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.987024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.987040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.987114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.987129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.987263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.987279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 
00:26:40.111 [2024-12-06 03:34:59.987361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.987376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.987523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.987544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.987623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.987638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.987730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.987746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.987837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.987852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 
00:26:40.111 [2024-12-06 03:34:59.987945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.987967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.988065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.988080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.988167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.988182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.988281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.988297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.988376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.988392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 
00:26:40.111 [2024-12-06 03:34:59.988492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.988507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.988664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.988679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.988776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.988793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.988935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.988955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.989098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.989113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 
00:26:40.111 [2024-12-06 03:34:59.989207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.989223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.989344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.989366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.989467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.989483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.989583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.989598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 00:26:40.111 [2024-12-06 03:34:59.989685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.111 [2024-12-06 03:34:59.989697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.111 qpair failed and we were unable to recover it. 
00:26:40.114 [2024-12-06 03:35:00.005847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.005860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 00:26:40.114 [2024-12-06 03:35:00.006021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.006034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 00:26:40.114 [2024-12-06 03:35:00.006160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.006173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 00:26:40.114 [2024-12-06 03:35:00.006273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.006286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 00:26:40.114 [2024-12-06 03:35:00.006474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.006487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 
00:26:40.114 [2024-12-06 03:35:00.006565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.006580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 00:26:40.114 [2024-12-06 03:35:00.006672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.006684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 00:26:40.114 [2024-12-06 03:35:00.006823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.006836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 00:26:40.114 [2024-12-06 03:35:00.006973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.006987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 00:26:40.114 [2024-12-06 03:35:00.007073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.114 [2024-12-06 03:35:00.007086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.114 qpair failed and we were unable to recover it. 
00:26:40.114 [2024-12-06 03:35:00.007193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.007206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.007393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.007405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.007493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.007506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.007713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.007726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.007830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.007844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 
00:26:40.115 [2024-12-06 03:35:00.008008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.008022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.008114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.008127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.008221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.008234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.008346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.008359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.008483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.008497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 
00:26:40.115 [2024-12-06 03:35:00.008587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.008601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.008799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.008812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.008906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.008919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.009048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.009063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.009175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.009189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 
00:26:40.115 [2024-12-06 03:35:00.009280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.009293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.009382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.009396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.009487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.009501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.009592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.009612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.009774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.009796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 
00:26:40.115 [2024-12-06 03:35:00.010009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.010059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.010314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.010355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.010538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.010609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.010744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.010789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.011050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.011114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 
00:26:40.115 [2024-12-06 03:35:00.011268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.011291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.011394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.011410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.011506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.011520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.011610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.011623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.011777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.011791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 
00:26:40.115 [2024-12-06 03:35:00.011940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.011960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.012035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.012048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.012151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.012165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.012270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.012283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.012369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.012384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 
00:26:40.115 [2024-12-06 03:35:00.012461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.012479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.012580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.012593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.115 [2024-12-06 03:35:00.012751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.115 [2024-12-06 03:35:00.012765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.115 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.012968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.012981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.013074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.013086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 
00:26:40.116 [2024-12-06 03:35:00.013193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.013205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.013349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.013361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.013427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.013440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.013532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.013545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.013751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.013764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 
00:26:40.116 [2024-12-06 03:35:00.013966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.013979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.014059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.014072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.014172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.014185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.014264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.014276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.014386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.014400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 
00:26:40.116 [2024-12-06 03:35:00.014495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.014508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.014709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.014722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.014925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.014938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.015063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.015075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.015160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.015174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 
00:26:40.116 [2024-12-06 03:35:00.015252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.015265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.015451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.015464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.015605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.015618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.015806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.015819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.015969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.015982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 
00:26:40.116 [2024-12-06 03:35:00.016109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.016122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.016209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.016221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.016315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.016327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.016416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.016428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.016514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.016527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 
00:26:40.116 [2024-12-06 03:35:00.016688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.016700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.016784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.016796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.017020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.017033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.017132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.017146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 00:26:40.116 [2024-12-06 03:35:00.017235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.116 [2024-12-06 03:35:00.017247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.116 qpair failed and we were unable to recover it. 
00:26:40.116 - 00:26:40.120 [2024-12-06 03:35:00.017381 - 03:35:00.033059] (the same three-line sequence repeats: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.)
00:26:40.120 [2024-12-06 03:35:00.033202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.033213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.033289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.033300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.033418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.033454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.033572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.033604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.033720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.033738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 
00:26:40.120 [2024-12-06 03:35:00.033834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.033849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.033972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.033989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.034079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.034095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.034243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.034258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.034324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.034339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 
00:26:40.120 [2024-12-06 03:35:00.034436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.034451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.034544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.034559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.034645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.034659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.034740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.034755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.034900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.034913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 
00:26:40.120 [2024-12-06 03:35:00.035008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.035118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.035216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.035287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.035370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 
00:26:40.120 [2024-12-06 03:35:00.035471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.035617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.035695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.035785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.035867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 
00:26:40.120 [2024-12-06 03:35:00.035952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.035964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.036046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.036057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.036129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.036139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.036217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.036228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.036314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.036324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 
00:26:40.120 [2024-12-06 03:35:00.036471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.036482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.036561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.036572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.036648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.036658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.036717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.036728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.036960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.036972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 
00:26:40.120 [2024-12-06 03:35:00.037039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.037050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.037121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.120 [2024-12-06 03:35:00.037132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.120 qpair failed and we were unable to recover it. 00:26:40.120 [2024-12-06 03:35:00.037215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.037225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.037306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.037316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.037455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.037466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 
00:26:40.121 [2024-12-06 03:35:00.037616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.037627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.037707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.037717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.037798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.037811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.037995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.038141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 
00:26:40.121 [2024-12-06 03:35:00.038240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.038334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.038414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.038495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.038574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 
00:26:40.121 [2024-12-06 03:35:00.038651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.038731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.038834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.038920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.038930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.039029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 
00:26:40.121 [2024-12-06 03:35:00.039121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.039194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.039347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.039423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.039515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 
00:26:40.121 [2024-12-06 03:35:00.039595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.039741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.039825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.039921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.039932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.040010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 
00:26:40.121 [2024-12-06 03:35:00.040096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.040183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.040271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.040348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.040427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 
00:26:40.121 [2024-12-06 03:35:00.040512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.040589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.040666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.040777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 00:26:40.121 [2024-12-06 03:35:00.040865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.121 [2024-12-06 03:35:00.040876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.121 qpair failed and we were unable to recover it. 
00:26:40.121 [2024-12-06 03:35:00.040961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.122 [2024-12-06 03:35:00.040972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.122 qpair failed and we were unable to recover it. 
[... repeated connect() failure / "qpair failed and we were unable to recover it" messages (errno = 111, addr=10.0.0.2, port=4420; tqpairs 0x7f4cd8000b90, 0x7f4ce0000b90, 0x7f4cd4000b90, 0x115cbe0) omitted ...]
[... repeated connect() failure / "qpair failed and we were unable to recover it" messages (errno = 111, tqpair=0x7f4cd8000b90, addr=10.0.0.2, port=4420) omitted; interleaved shell trace follows ...]
00:26:40.124 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:40.124 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:40.124 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:40.124 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:40.124 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... repeated connect() failure / "qpair failed and we were unable to recover it" messages (errno = 111, tqpair=0x7f4cd8000b90, addr=10.0.0.2, port=4420) omitted ...]
00:26:40.125 [2024-12-06 03:35:00.054501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.054512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.054589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.054600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.054664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.054675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.054745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.054756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.054828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.054839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 
00:26:40.125 [2024-12-06 03:35:00.054897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.054910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.055005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.055101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.055173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.055243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 
00:26:40.125 [2024-12-06 03:35:00.055397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.055476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.055637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.055784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.055877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 
00:26:40.125 [2024-12-06 03:35:00.055958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.055969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.056032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.056097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.056258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.056344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 
00:26:40.125 [2024-12-06 03:35:00.056422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.056501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.056596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.056691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.056778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 
00:26:40.125 [2024-12-06 03:35:00.056886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.056966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.056978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.057050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.057061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.057146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.057158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.057232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.057244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 
00:26:40.125 [2024-12-06 03:35:00.057325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.125 [2024-12-06 03:35:00.057337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.125 qpair failed and we were unable to recover it. 00:26:40.125 [2024-12-06 03:35:00.057498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.057509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.057593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.057604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.057673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.057684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.057747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.057757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 
00:26:40.126 [2024-12-06 03:35:00.057888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.057899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.057958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.057970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.058039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.058051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.058126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.058137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.058210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.058222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 
00:26:40.126 [2024-12-06 03:35:00.058302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.058322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.058456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.058468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.058538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.058549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.058635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.058646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.058722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.058732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 
00:26:40.126 [2024-12-06 03:35:00.058857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.058868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.058998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.059010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.059158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.059169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.059340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.059352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.059450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.059461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 
00:26:40.126 [2024-12-06 03:35:00.059626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.059637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.059730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.059741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.059827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.059840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.059928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.059939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.060066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.060077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 
00:26:40.126 [2024-12-06 03:35:00.060153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.060165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.060236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.060247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.060369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.060381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.060468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.060479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.060576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.060588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 
00:26:40.126 [2024-12-06 03:35:00.060751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.060762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.060849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.060860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.061035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.061047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.061234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.061245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.061331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.061343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 
00:26:40.126 [2024-12-06 03:35:00.061448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.061459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.061669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.061681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.061761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.061774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.061925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.126 [2024-12-06 03:35:00.061936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.126 qpair failed and we were unable to recover it. 00:26:40.126 [2024-12-06 03:35:00.062085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.062096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 
00:26:40.127 [2024-12-06 03:35:00.062262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.062275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 00:26:40.127 [2024-12-06 03:35:00.062412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.062423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 00:26:40.127 [2024-12-06 03:35:00.062509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.062521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 00:26:40.127 [2024-12-06 03:35:00.062741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.062754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 00:26:40.127 [2024-12-06 03:35:00.062831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.062842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 
00:26:40.127 [2024-12-06 03:35:00.062931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.062942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 00:26:40.127 [2024-12-06 03:35:00.063026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.063038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 00:26:40.127 [2024-12-06 03:35:00.063185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.063197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 00:26:40.127 [2024-12-06 03:35:00.063286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.063297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 00:26:40.127 [2024-12-06 03:35:00.063384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.127 [2024-12-06 03:35:00.063394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.127 qpair failed and we were unable to recover it. 
00:26:40.130 [2024-12-06 03:35:00.076979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.076990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.077089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.077100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.077188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.077199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.077278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.077289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.077366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.077377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 
00:26:40.130 [2024-12-06 03:35:00.077464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.077475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.077559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.077571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.077714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.077726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.077812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.077823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.077975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.077987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 
00:26:40.130 [2024-12-06 03:35:00.078087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.078170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.078245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.078331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.078418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 
00:26:40.130 [2024-12-06 03:35:00.078513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.078592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.078672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.078760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.078903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 
00:26:40.130 [2024-12-06 03:35:00.078985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.078996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.079069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.079079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.079158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.079171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.079315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.079327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.079392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.079410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 
00:26:40.130 [2024-12-06 03:35:00.079487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.130 [2024-12-06 03:35:00.079498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.130 qpair failed and we were unable to recover it. 00:26:40.130 [2024-12-06 03:35:00.079650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.079661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.079725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.079735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.079956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.079967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.080062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 
00:26:40.131 [2024-12-06 03:35:00.080170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.080277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.080367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.080474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.080568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 
00:26:40.131 [2024-12-06 03:35:00.080652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.080754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.080894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.080969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.080980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.081071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.081083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 
00:26:40.131 [2024-12-06 03:35:00.081172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.081182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.081249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.081260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.081404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.081415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.081502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.081512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 00:26:40.131 [2024-12-06 03:35:00.081582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.131 [2024-12-06 03:35:00.081593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.131 qpair failed and we were unable to recover it. 
00:26:40.131 [2024-12-06 03:35:00.082081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.131 [2024-12-06 03:35:00.082111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:40.131 qpair failed and we were unable to recover it.
00:26:40.131 [2024-12-06 03:35:00.082208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.131 [2024-12-06 03:35:00.082230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:40.131 qpair failed and we were unable to recover it.
00:26:40.132 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:40.132 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:40.132 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.132 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:40.132 [2024-12-06 03:35:00.087271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.132 [2024-12-06 03:35:00.087285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.132 qpair failed and we were unable to recover it. 00:26:40.132 [2024-12-06 03:35:00.087372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.132 [2024-12-06 03:35:00.087388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.132 qpair failed and we were unable to recover it. 00:26:40.132 [2024-12-06 03:35:00.087465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.132 [2024-12-06 03:35:00.087480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.132 qpair failed and we were unable to recover it. 00:26:40.132 [2024-12-06 03:35:00.087672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.087688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.087780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.087795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 
00:26:40.133 [2024-12-06 03:35:00.087881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.087895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.087995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.088010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.088141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.088161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.088304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.088320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.088438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.088453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 
00:26:40.133 [2024-12-06 03:35:00.088544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.088558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.088646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.088660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.088750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.088765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.088857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.088870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.089045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.089056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 
00:26:40.133 [2024-12-06 03:35:00.089254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.089265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.089411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.089422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.089531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.089542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.089646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.089657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.089726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.089737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 
00:26:40.133 [2024-12-06 03:35:00.089816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.089828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.089917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.089928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.090066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.090078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.090168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.090179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.090273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.090284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 
00:26:40.133 [2024-12-06 03:35:00.090361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.090372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.090508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.090518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.090602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.090613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.090770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.090781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.090938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.090952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 
00:26:40.133 [2024-12-06 03:35:00.091042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.091133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.091237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.091382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.091472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 
00:26:40.133 [2024-12-06 03:35:00.091556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.091638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.091778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.091863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.133 [2024-12-06 03:35:00.091937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.091951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 
00:26:40.133 [2024-12-06 03:35:00.092030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.133 [2024-12-06 03:35:00.092041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.133 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.092123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.092134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.092225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.092236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.092303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.092313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.092448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.092460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 
00:26:40.134 [2024-12-06 03:35:00.092551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.092562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.092625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.092637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.092733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.092746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.092820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.092832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.092911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.092922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 
00:26:40.134 [2024-12-06 03:35:00.093012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.093100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.093167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.093314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.093400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 
00:26:40.134 [2024-12-06 03:35:00.093490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.093594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.093682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.093760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.093845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 
00:26:40.134 [2024-12-06 03:35:00.093931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.093941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.094016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.094111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.094193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.094272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 
00:26:40.134 [2024-12-06 03:35:00.094356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.094442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.094519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.094600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.094678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 
00:26:40.134 [2024-12-06 03:35:00.094765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.094859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.094870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.095001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.095013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.095103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.095114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 00:26:40.134 [2024-12-06 03:35:00.095184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.134 [2024-12-06 03:35:00.095195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.134 qpair failed and we were unable to recover it. 
00:26:40.134 [2024-12-06 03:35:00.095258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.134 [2024-12-06 03:35:00.095269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.134 qpair failed and we were unable to recover it.
[log condensed: the same connect() errno = 111 / qpair-failure triplet repeats 60 more times for tqpair=0x7f4cd8000b90 between 03:35:00.095333 and 03:35:00.101206]
00:26:40.136 [2024-12-06 03:35:00.101404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.136 [2024-12-06 03:35:00.101442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ce0000b90 with addr=10.0.0.2, port=4420
00:26:40.136 qpair failed and we were unable to recover it.
[log condensed: the triplet repeats 1 more time for tqpair=0x7f4ce0000b90 at 03:35:00.101588]
00:26:40.136 [2024-12-06 03:35:00.101734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.136 [2024-12-06 03:35:00.101768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd4000b90 with addr=10.0.0.2, port=4420
00:26:40.136 qpair failed and we were unable to recover it.
[log condensed: the triplet repeats 6 more times for tqpair=0x7f4cd4000b90 through 03:35:00.102632]
00:26:40.136 [2024-12-06 03:35:00.102819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.136 [2024-12-06 03:35:00.102849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420
00:26:40.136 qpair failed and we were unable to recover it.
[log condensed: the triplet repeats 18 more times for tqpair=0x115cbe0 through 03:35:00.104682]
00:26:40.137 [2024-12-06 03:35:00.104769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.137 [2024-12-06 03:35:00.104785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.137 qpair failed and we were unable to recover it.
[log condensed: the triplet repeats 25 more times for tqpair=0x7f4cd8000b90 through 03:35:00.107109]
00:26:40.138 [2024-12-06 03:35:00.107185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.107195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.107261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.107271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.107340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.107350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.107511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.107522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.107593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.107603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 
00:26:40.138 [2024-12-06 03:35:00.107740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.107750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.107824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.107836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.107920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.107931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.107999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.108076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 
00:26:40.138 [2024-12-06 03:35:00.108155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.108236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.108320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.108414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.108503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 
00:26:40.138 [2024-12-06 03:35:00.108577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.108668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.108741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.108819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.108901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.108911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 
00:26:40.138 [2024-12-06 03:35:00.109048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.109059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.109192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.109203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.109268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.109279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.109345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.109356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.109489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.109500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 
00:26:40.138 [2024-12-06 03:35:00.109653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.109665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.109735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.109747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.109905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.109915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.110018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.110030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.110177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.110187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 
00:26:40.138 [2024-12-06 03:35:00.110258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.110268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.110368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.110381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.110513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.110524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.110650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.110661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.110752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.110763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 
00:26:40.138 [2024-12-06 03:35:00.110827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.138 [2024-12-06 03:35:00.110837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.138 qpair failed and we were unable to recover it. 00:26:40.138 [2024-12-06 03:35:00.110917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.110927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.111008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.111019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.111085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.111095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.111236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.111246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 
00:26:40.139 [2024-12-06 03:35:00.111331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.111342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.111419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.111430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.111563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.111574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.111710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.111721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.111798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.111809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 
00:26:40.139 [2024-12-06 03:35:00.111942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.111957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.112026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.112037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.112116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.112127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.112233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.112244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.112429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.112440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 
00:26:40.139 [2024-12-06 03:35:00.112518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.112528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.112673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.112683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.112879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.112890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.113076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.113088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.113165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.113176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 
00:26:40.139 [2024-12-06 03:35:00.113268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.113278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.113359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.113370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.113473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.113484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.113628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.113638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.113790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.113800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 
00:26:40.139 [2024-12-06 03:35:00.113959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.113972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.114041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.114136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.114221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.114307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 
00:26:40.139 [2024-12-06 03:35:00.114397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.114482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.114555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.114648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.114749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 
00:26:40.139 [2024-12-06 03:35:00.114853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.114943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.114961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.115103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.115113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.115185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.139 [2024-12-06 03:35:00.115196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.139 qpair failed and we were unable to recover it. 00:26:40.139 [2024-12-06 03:35:00.115296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.140 [2024-12-06 03:35:00.115308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.140 qpair failed and we were unable to recover it. 
00:26:40.140 [2024-12-06 03:35:00.115383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.140 [2024-12-06 03:35:00.115393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420
00:26:40.140 qpair failed and we were unable to recover it.
00:26:40.141 Malloc0
00:26:40.141 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.141 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:40.142 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.142 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:40.143 [2024-12-06 03:35:00.130836] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.143 [2024-12-06 03:35:00.130971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.130981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.131208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.131218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.131360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.131370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.131531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.131541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.131627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.131637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 
00:26:40.143 [2024-12-06 03:35:00.131883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.131893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.132085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.132098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.132245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.132256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.132337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.132347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.132443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.132454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 
00:26:40.143 [2024-12-06 03:35:00.132545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.132555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.132794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.132804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.133037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.133049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.133143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.133154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.133251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.133261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 
00:26:40.143 [2024-12-06 03:35:00.133353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.133363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.133447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.133457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.133546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.133556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.133686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.133696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.133893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.133903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 
00:26:40.143 [2024-12-06 03:35:00.134105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.134117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.134206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.134217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.134368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.134379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.134522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.134533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.134759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.134770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 
00:26:40.143 [2024-12-06 03:35:00.134980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.134990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.135086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.135096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.135180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.135190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.135263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.135274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.135372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.135383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 
00:26:40.143 [2024-12-06 03:35:00.135625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.135635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.135718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.135728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.135985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.135997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.136155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.136165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.136316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.136326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 
00:26:40.143 [2024-12-06 03:35:00.136394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.136405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.143 [2024-12-06 03:35:00.136549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.143 [2024-12-06 03:35:00.136559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.143 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.136707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.136717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.136850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.136861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.137046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.137057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 
00:26:40.144 [2024-12-06 03:35:00.137198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.137209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.137348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.137359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.137476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.137486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.137733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.137743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.137828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.137838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 
00:26:40.144 [2024-12-06 03:35:00.138066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.138077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.138231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.138245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.138455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.138466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.138649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.138659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.138883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.138894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 
00:26:40.144 [2024-12-06 03:35:00.139135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.139146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.139325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.139336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.139504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.139515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.144 [2024-12-06 03:35:00.139677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.139688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.139837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.139848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 
00:26:40.144 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.144 [2024-12-06 03:35:00.139950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.139961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.140107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.140118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.144 [2024-12-06 03:35:00.140314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.140325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.144 [2024-12-06 03:35:00.140594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.140605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 
00:26:40.144 [2024-12-06 03:35:00.140885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.140896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.141121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.141132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.141228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.141238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.141386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.141397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.141546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.141556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 
00:26:40.144 [2024-12-06 03:35:00.141770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.141781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.142019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.142029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.142185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.142196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.142344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.142354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.142427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.142437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 
00:26:40.144 [2024-12-06 03:35:00.142601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.142612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.142861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.142871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cd8000b90 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.143094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.143118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.144 [2024-12-06 03:35:00.143235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.144 [2024-12-06 03:35:00.143251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.144 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.143393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.143408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 
00:26:40.145 [2024-12-06 03:35:00.143638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.143653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.143837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.143852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.144066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.144083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.144290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.144305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.144458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.144472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 
00:26:40.145 [2024-12-06 03:35:00.144629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.144643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.144873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.144887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.145055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.145070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.145169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.145183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.145361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.145376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 
00:26:40.145 [2024-12-06 03:35:00.145613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.145628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.145739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.145754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.145857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.145871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.146024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.146040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.146251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.146265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 
00:26:40.145 [2024-12-06 03:35:00.146487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.146501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.146789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.146803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.147019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.147036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.147198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.147213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.147370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.147384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 
00:26:40.145 [2024-12-06 03:35:00.147492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.147507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.145 [2024-12-06 03:35:00.147745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.147760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.147920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.147935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.145 [2024-12-06 03:35:00.148143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.148159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 
00:26:40.145 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.145 [2024-12-06 03:35:00.148376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.148391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.145 [2024-12-06 03:35:00.148638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.148653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.148859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.148874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.149077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.149093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.149206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.149220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 
00:26:40.145 [2024-12-06 03:35:00.149322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.145 [2024-12-06 03:35:00.149337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.145 qpair failed and we were unable to recover it. 00:26:40.145 [2024-12-06 03:35:00.149474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.149488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.149682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.149696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.149914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.149929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.150124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.150139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 
00:26:40.146 [2024-12-06 03:35:00.150373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.150387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.150540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.150554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.150787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.150802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.150981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.150996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.151242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.151257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 
00:26:40.146 [2024-12-06 03:35:00.151351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.151366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.151640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.151655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.151853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.151867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.152046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.152061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.152268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.152283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 
00:26:40.146 [2024-12-06 03:35:00.152510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.152524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.152693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.152709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.152914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.152929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.153093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.153108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.153323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.153338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 
00:26:40.146 [2024-12-06 03:35:00.153569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.153587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.153766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.153780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.153956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.153972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.154201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.154215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.154394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.154409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 
00:26:40.146 [2024-12-06 03:35:00.154609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.154624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.154832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.154847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.155076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.155092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.155173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.155188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.155273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.155287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 
00:26:40.146 [2024-12-06 03:35:00.155396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.155411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.155512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.155527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.146 [2024-12-06 03:35:00.155698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.155712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.155854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.155868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 
00:26:40.146 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.146 [2024-12-06 03:35:00.156016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.156032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 [2024-12-06 03:35:00.156242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.156257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.146 [2024-12-06 03:35:00.156451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.156465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.146 qpair failed and we were unable to recover it. 00:26:40.146 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.146 [2024-12-06 03:35:00.156678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.146 [2024-12-06 03:35:00.156694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 
00:26:40.147 [2024-12-06 03:35:00.156832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.156846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.157083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.157098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.157238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.157253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.157362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.157376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.157527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.157541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 
00:26:40.147 [2024-12-06 03:35:00.157708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.157722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.157973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.157989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.158157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.158173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.158269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.158284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.158386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.158401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 
00:26:40.147 [2024-12-06 03:35:00.158502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.158516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.158610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.158624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.158790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.158805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 00:26:40.147 [2024-12-06 03:35:00.158959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.147 [2024-12-06 03:35:00.158974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115cbe0 with addr=10.0.0.2, port=4420 00:26:40.147 qpair failed and we were unable to recover it. 
00:26:40.147 [2024-12-06 03:35:00.159068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.147 [2024-12-06 03:35:00.161512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.147 [2024-12-06 03:35:00.161597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.147 [2024-12-06 03:35:00.161619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.147 [2024-12-06 03:35:00.161630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.147 [2024-12-06 03:35:00.161639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.147 [2024-12-06 03:35:00.161666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.147 qpair failed and we were unable to recover it. 
00:26:40.147 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.147 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:40.147 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.147 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.409 [2024-12-06 03:35:00.171418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.410 [2024-12-06 03:35:00.171528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.410 [2024-12-06 03:35:00.171544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.410 [2024-12-06 03:35:00.171551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.410 [2024-12-06 03:35:00.171561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.410 [2024-12-06 03:35:00.171578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.410 qpair failed and we were unable to recover it. 
00:26:40.410 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.410 03:35:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2771915 00:26:40.410 [2024-12-06 03:35:00.181475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.410 [2024-12-06 03:35:00.181574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.410 [2024-12-06 03:35:00.181590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.410 [2024-12-06 03:35:00.181597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.410 [2024-12-06 03:35:00.181603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.410 [2024-12-06 03:35:00.181618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.410 qpair failed and we were unable to recover it. 
00:26:40.410 [2024-12-06 03:35:00.191423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.410 [2024-12-06 03:35:00.191488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.410 [2024-12-06 03:35:00.191503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.410 [2024-12-06 03:35:00.191510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.410 [2024-12-06 03:35:00.191517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.410 [2024-12-06 03:35:00.191532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.410 qpair failed and we were unable to recover it. 
00:26:40.410 [2024-12-06 03:35:00.201369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.410 [2024-12-06 03:35:00.201429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.410 [2024-12-06 03:35:00.201444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.410 [2024-12-06 03:35:00.201451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.410 [2024-12-06 03:35:00.201457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.410 [2024-12-06 03:35:00.201472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.410 qpair failed and we were unable to recover it. 
00:26:40.410 [2024-12-06 03:35:00.211394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.410 [2024-12-06 03:35:00.211448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.410 [2024-12-06 03:35:00.211463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.410 [2024-12-06 03:35:00.211470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.410 [2024-12-06 03:35:00.211479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.410 [2024-12-06 03:35:00.211494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.410 qpair failed and we were unable to recover it.
00:26:40.410 [2024-12-06 03:35:00.221467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.410 [2024-12-06 03:35:00.221525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.410 [2024-12-06 03:35:00.221540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.410 [2024-12-06 03:35:00.221547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.410 [2024-12-06 03:35:00.221553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.410 [2024-12-06 03:35:00.221567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.410 qpair failed and we were unable to recover it.
00:26:40.410 [2024-12-06 03:35:00.231441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.410 [2024-12-06 03:35:00.231500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.410 [2024-12-06 03:35:00.231515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.410 [2024-12-06 03:35:00.231522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.410 [2024-12-06 03:35:00.231529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.410 [2024-12-06 03:35:00.231543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.410 qpair failed and we were unable to recover it.
00:26:40.410 [2024-12-06 03:35:00.241484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.410 [2024-12-06 03:35:00.241569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.410 [2024-12-06 03:35:00.241587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.410 [2024-12-06 03:35:00.241594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.410 [2024-12-06 03:35:00.241601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.410 [2024-12-06 03:35:00.241617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.410 qpair failed and we were unable to recover it.
00:26:40.410 [2024-12-06 03:35:00.251547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.410 [2024-12-06 03:35:00.251605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.410 [2024-12-06 03:35:00.251620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.410 [2024-12-06 03:35:00.251627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.410 [2024-12-06 03:35:00.251634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.410 [2024-12-06 03:35:00.251649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.410 qpair failed and we were unable to recover it.
00:26:40.410 [2024-12-06 03:35:00.261560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.410 [2024-12-06 03:35:00.261616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.410 [2024-12-06 03:35:00.261631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.410 [2024-12-06 03:35:00.261638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.410 [2024-12-06 03:35:00.261644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.410 [2024-12-06 03:35:00.261658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.271628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.271727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.271744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.271751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.271757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.271773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.281603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.281660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.281677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.281686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.281693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.281708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.291704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.291760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.291775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.291782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.291788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.291803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.301644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.301694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.301713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.301721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.301727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.301742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.311700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.311759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.311774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.311781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.311788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.311803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.321650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.321713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.321728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.321735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.321741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.321756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.331744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.331802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.331816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.331823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.331829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.331844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.341805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.341875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.341890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.341897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.341906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.341921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.351787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.351848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.351863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.351871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.351877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.351893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.361827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.361884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.361898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.361905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.361911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.361926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.371838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.371896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.371911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.371918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.371924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.371939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.381888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.381950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.381965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.381973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.381979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.381995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.411 qpair failed and we were unable to recover it.
00:26:40.411 [2024-12-06 03:35:00.391910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.411 [2024-12-06 03:35:00.391971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.411 [2024-12-06 03:35:00.391986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.411 [2024-12-06 03:35:00.391993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.411 [2024-12-06 03:35:00.391999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.411 [2024-12-06 03:35:00.392015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.401937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.402000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.402015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.402022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.402028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.402044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.411955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.412013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.412027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.412034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.412040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.412055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.421974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.422030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.422044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.422051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.422057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.422073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.432009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.432070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.432088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.432095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.432102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.432117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.442068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.442130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.442144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.442151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.442157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.442172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.452109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.452168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.452182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.452189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.452196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.452210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.462093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.462152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.462166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.462173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.462179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.462194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.472137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.472198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.472212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.472219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.472229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.472244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.482172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.482228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.482244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.482251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.482257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.482272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.492188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.492244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.492259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.492266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.492273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.492288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.502236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.502302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.502316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.502323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.502329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.502344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.512271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.512329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.512344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.412 [2024-12-06 03:35:00.512350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.412 [2024-12-06 03:35:00.512356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.412 [2024-12-06 03:35:00.512371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.412 qpair failed and we were unable to recover it.
00:26:40.412 [2024-12-06 03:35:00.522270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.412 [2024-12-06 03:35:00.522334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.412 [2024-12-06 03:35:00.522349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.413 [2024-12-06 03:35:00.522356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.413 [2024-12-06 03:35:00.522362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.413 [2024-12-06 03:35:00.522377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.413 qpair failed and we were unable to recover it.
00:26:40.413 [2024-12-06 03:35:00.532296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.413 [2024-12-06 03:35:00.532352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.413 [2024-12-06 03:35:00.532366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.413 [2024-12-06 03:35:00.532373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.413 [2024-12-06 03:35:00.532379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.413 [2024-12-06 03:35:00.532393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.413 qpair failed and we were unable to recover it.
00:26:40.413 [2024-12-06 03:35:00.542266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.413 [2024-12-06 03:35:00.542359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.413 [2024-12-06 03:35:00.542373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.413 [2024-12-06 03:35:00.542379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.413 [2024-12-06 03:35:00.542386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.413 [2024-12-06 03:35:00.542400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.413 qpair failed and we were unable to recover it.
00:26:40.675 [2024-12-06 03:35:00.552392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:40.675 [2024-12-06 03:35:00.552450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:40.675 [2024-12-06 03:35:00.552465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:40.675 [2024-12-06 03:35:00.552471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:40.675 [2024-12-06 03:35:00.552478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:40.675 [2024-12-06 03:35:00.552492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:40.675 qpair failed and we were unable to recover it.
00:26:40.675 [2024-12-06 03:35:00.562388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.562444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.562462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.562469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.562475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.562490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.572338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.572395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.572409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.572416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.572422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.572437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.582517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.582600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.582614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.582621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.582626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.582641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.592511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.592594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.592608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.592615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.592621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.592635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.602546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.602609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.602623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.602630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.602638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.602653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.612517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.612598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.612613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.612619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.612626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.612640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.622575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.622634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.622648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.622655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.622661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.622676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.632618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.632721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.632735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.632742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.632748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.632763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.642630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.642688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.642702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.642709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.642715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.642730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.652624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.652682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.652696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.652703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.652709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.652724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.662716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.662780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.662795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.675 [2024-12-06 03:35:00.662801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.675 [2024-12-06 03:35:00.662808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.675 [2024-12-06 03:35:00.662822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.675 qpair failed and we were unable to recover it. 
00:26:40.675 [2024-12-06 03:35:00.672722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.675 [2024-12-06 03:35:00.672782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.675 [2024-12-06 03:35:00.672796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.672803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.672809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.672823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.682725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.682785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.682799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.682806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.682812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.682826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.692777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.692843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.692862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.692869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.692876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.692891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.702765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.702821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.702835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.702842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.702849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.702863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.712781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.712841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.712856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.712863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.712869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.712885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.722841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.722924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.722940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.722951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.722958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.722974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.732877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.732943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.732961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.732969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.732978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.732993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.742821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.742878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.742892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.742899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.742906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.742920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.752930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.752997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.753011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.753019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.753025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.753040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.762956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.763013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.763027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.763034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.763040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.763055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.772942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.773016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.773031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.773037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.773044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.773059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.783000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.783055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.783069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.783076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.783083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.783097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.793026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.676 [2024-12-06 03:35:00.793085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.676 [2024-12-06 03:35:00.793100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.676 [2024-12-06 03:35:00.793107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.676 [2024-12-06 03:35:00.793113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.676 [2024-12-06 03:35:00.793129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.676 qpair failed and we were unable to recover it. 
00:26:40.676 [2024-12-06 03:35:00.803054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.677 [2024-12-06 03:35:00.803110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.677 [2024-12-06 03:35:00.803124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.677 [2024-12-06 03:35:00.803131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.677 [2024-12-06 03:35:00.803137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.677 [2024-12-06 03:35:00.803152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.677 qpair failed and we were unable to recover it. 
00:26:40.938 [2024-12-06 03:35:00.813079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.938 [2024-12-06 03:35:00.813139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.938 [2024-12-06 03:35:00.813154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.938 [2024-12-06 03:35:00.813161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.938 [2024-12-06 03:35:00.813167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.938 [2024-12-06 03:35:00.813183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.938 qpair failed and we were unable to recover it. 
00:26:40.938 [2024-12-06 03:35:00.823116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.938 [2024-12-06 03:35:00.823175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.938 [2024-12-06 03:35:00.823193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.938 [2024-12-06 03:35:00.823200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.938 [2024-12-06 03:35:00.823206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.938 [2024-12-06 03:35:00.823221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.938 qpair failed and we were unable to recover it. 
00:26:40.938 [2024-12-06 03:35:00.833166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.938 [2024-12-06 03:35:00.833224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.938 [2024-12-06 03:35:00.833238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.938 [2024-12-06 03:35:00.833245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.938 [2024-12-06 03:35:00.833252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.938 [2024-12-06 03:35:00.833265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.938 qpair failed and we were unable to recover it. 
00:26:40.938 [2024-12-06 03:35:00.843176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.938 [2024-12-06 03:35:00.843234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.938 [2024-12-06 03:35:00.843248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.938 [2024-12-06 03:35:00.843255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.938 [2024-12-06 03:35:00.843262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.938 [2024-12-06 03:35:00.843276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.938 qpair failed and we were unable to recover it. 
00:26:40.938 [2024-12-06 03:35:00.853193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.938 [2024-12-06 03:35:00.853249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.853263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.853270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.853277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.853291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.863237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.863289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.863304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.863311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.863320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.863335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.873265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.873322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.873335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.873342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.873349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.873363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.883260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.883315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.883329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.883336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.883342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.883357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.893253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.893329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.893344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.893351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.893357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.893371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.903347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.903403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.903419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.903426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.903432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.903447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.913377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.913465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.913480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.913487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.913493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.913507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.923399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.923456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.923471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.923478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.923484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.923499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.933434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.933489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.933503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.933510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.933516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.933530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.943384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.943443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.943457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.939 [2024-12-06 03:35:00.943464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.939 [2024-12-06 03:35:00.943470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.939 [2024-12-06 03:35:00.943484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.939 qpair failed and we were unable to recover it. 
00:26:40.939 [2024-12-06 03:35:00.953554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.939 [2024-12-06 03:35:00.953620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.939 [2024-12-06 03:35:00.953637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:00.953644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:00.953650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:00.953664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.940 [2024-12-06 03:35:00.963560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.940 [2024-12-06 03:35:00.963617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.940 [2024-12-06 03:35:00.963631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:00.963638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:00.963644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:00.963658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.940 [2024-12-06 03:35:00.973585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.940 [2024-12-06 03:35:00.973643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.940 [2024-12-06 03:35:00.973657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:00.973664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:00.973670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:00.973685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.940 [2024-12-06 03:35:00.983617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.940 [2024-12-06 03:35:00.983677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.940 [2024-12-06 03:35:00.983691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:00.983698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:00.983704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:00.983719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.940 [2024-12-06 03:35:00.993606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.940 [2024-12-06 03:35:00.993663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.940 [2024-12-06 03:35:00.993678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:00.993686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:00.993695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:00.993710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.940 [2024-12-06 03:35:01.003714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.940 [2024-12-06 03:35:01.003772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.940 [2024-12-06 03:35:01.003786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:01.003793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:01.003799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:01.003813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.940 [2024-12-06 03:35:01.013655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.940 [2024-12-06 03:35:01.013712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.940 [2024-12-06 03:35:01.013727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:01.013735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:01.013741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:01.013755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.940 [2024-12-06 03:35:01.023677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.940 [2024-12-06 03:35:01.023732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.940 [2024-12-06 03:35:01.023747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:01.023754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:01.023760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:01.023774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.940 [2024-12-06 03:35:01.033723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.940 [2024-12-06 03:35:01.033779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.940 [2024-12-06 03:35:01.033794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:01.033801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:01.033807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:01.033821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.940 [2024-12-06 03:35:01.043748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.940 [2024-12-06 03:35:01.043806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.940 [2024-12-06 03:35:01.043820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.940 [2024-12-06 03:35:01.043828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.940 [2024-12-06 03:35:01.043834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.940 [2024-12-06 03:35:01.043848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.940 qpair failed and we were unable to recover it. 
00:26:40.941 [2024-12-06 03:35:01.053828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.941 [2024-12-06 03:35:01.053889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.941 [2024-12-06 03:35:01.053904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.941 [2024-12-06 03:35:01.053911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.941 [2024-12-06 03:35:01.053918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.941 [2024-12-06 03:35:01.053932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.941 qpair failed and we were unable to recover it. 
00:26:40.941 [2024-12-06 03:35:01.063827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.941 [2024-12-06 03:35:01.063889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.941 [2024-12-06 03:35:01.063904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.941 [2024-12-06 03:35:01.063910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.941 [2024-12-06 03:35:01.063917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.941 [2024-12-06 03:35:01.063931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.941 qpair failed and we were unable to recover it. 
00:26:40.941 [2024-12-06 03:35:01.073833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:40.941 [2024-12-06 03:35:01.073891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:40.941 [2024-12-06 03:35:01.073906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:40.941 [2024-12-06 03:35:01.073913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:40.941 [2024-12-06 03:35:01.073919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:40.941 [2024-12-06 03:35:01.073933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:40.941 qpair failed and we were unable to recover it. 
00:26:41.202 [2024-12-06 03:35:01.083897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.202 [2024-12-06 03:35:01.083962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.202 [2024-12-06 03:35:01.083980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.202 [2024-12-06 03:35:01.083988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.202 [2024-12-06 03:35:01.083993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.202 [2024-12-06 03:35:01.084008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.202 qpair failed and we were unable to recover it. 
00:26:41.202 [2024-12-06 03:35:01.093876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.202 [2024-12-06 03:35:01.093935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.202 [2024-12-06 03:35:01.093952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.202 [2024-12-06 03:35:01.093960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.202 [2024-12-06 03:35:01.093966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.202 [2024-12-06 03:35:01.093980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.202 qpair failed and we were unable to recover it. 
00:26:41.202 [2024-12-06 03:35:01.103953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.202 [2024-12-06 03:35:01.104015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.202 [2024-12-06 03:35:01.104030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.202 [2024-12-06 03:35:01.104037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.202 [2024-12-06 03:35:01.104043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.202 [2024-12-06 03:35:01.104058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.202 qpair failed and we were unable to recover it. 
00:26:41.202 [2024-12-06 03:35:01.113938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.202 [2024-12-06 03:35:01.114025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.202 [2024-12-06 03:35:01.114041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.202 [2024-12-06 03:35:01.114048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.202 [2024-12-06 03:35:01.114054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.202 [2024-12-06 03:35:01.114070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.202 qpair failed and we were unable to recover it. 
00:26:41.202 [2024-12-06 03:35:01.123969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.202 [2024-12-06 03:35:01.124027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.202 [2024-12-06 03:35:01.124041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.202 [2024-12-06 03:35:01.124048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.202 [2024-12-06 03:35:01.124057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.202 [2024-12-06 03:35:01.124072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.202 qpair failed and we were unable to recover it. 
00:26:41.202 [2024-12-06 03:35:01.133993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.202 [2024-12-06 03:35:01.134047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.202 [2024-12-06 03:35:01.134062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.202 [2024-12-06 03:35:01.134069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.202 [2024-12-06 03:35:01.134075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.202 [2024-12-06 03:35:01.134090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.202 qpair failed and we were unable to recover it. 
00:26:41.202 [2024-12-06 03:35:01.144015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.202 [2024-12-06 03:35:01.144069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.202 [2024-12-06 03:35:01.144084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.202 [2024-12-06 03:35:01.144091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.202 [2024-12-06 03:35:01.144097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.202 [2024-12-06 03:35:01.144112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.202 qpair failed and we were unable to recover it. 
00:26:41.202 [2024-12-06 03:35:01.154049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.202 [2024-12-06 03:35:01.154106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.202 [2024-12-06 03:35:01.154120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.203 [2024-12-06 03:35:01.154127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.203 [2024-12-06 03:35:01.154133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.203 [2024-12-06 03:35:01.154147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.203 qpair failed and we were unable to recover it. 
00:26:41.203 [2024-12-06 03:35:01.164087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.164166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.164180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.164187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.203 [2024-12-06 03:35:01.164193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.203 [2024-12-06 03:35:01.164207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.203 qpair failed and we were unable to recover it.
00:26:41.203 [2024-12-06 03:35:01.174180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.174235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.174250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.174257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.203 [2024-12-06 03:35:01.174263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.203 [2024-12-06 03:35:01.174278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.203 qpair failed and we were unable to recover it.
00:26:41.203 [2024-12-06 03:35:01.184098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.184153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.184168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.184175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.203 [2024-12-06 03:35:01.184181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.203 [2024-12-06 03:35:01.184196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.203 qpair failed and we were unable to recover it.
00:26:41.203 [2024-12-06 03:35:01.194154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.194212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.194226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.194233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.203 [2024-12-06 03:35:01.194239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.203 [2024-12-06 03:35:01.194253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.203 qpair failed and we were unable to recover it.
00:26:41.203 [2024-12-06 03:35:01.204197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.204256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.204270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.204277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.203 [2024-12-06 03:35:01.204283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.203 [2024-12-06 03:35:01.204298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.203 qpair failed and we were unable to recover it.
00:26:41.203 [2024-12-06 03:35:01.214226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.214279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.214298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.214305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.203 [2024-12-06 03:35:01.214311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.203 [2024-12-06 03:35:01.214325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.203 qpair failed and we were unable to recover it.
00:26:41.203 [2024-12-06 03:35:01.224262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.224323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.224337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.224344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.203 [2024-12-06 03:35:01.224350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.203 [2024-12-06 03:35:01.224364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.203 qpair failed and we were unable to recover it.
00:26:41.203 [2024-12-06 03:35:01.234286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.234345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.234359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.234366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.203 [2024-12-06 03:35:01.234373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.203 [2024-12-06 03:35:01.234387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.203 qpair failed and we were unable to recover it.
00:26:41.203 [2024-12-06 03:35:01.244306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.244366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.244383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.244391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.203 [2024-12-06 03:35:01.244396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.203 [2024-12-06 03:35:01.244412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.203 qpair failed and we were unable to recover it.
00:26:41.203 [2024-12-06 03:35:01.254322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.203 [2024-12-06 03:35:01.254409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.203 [2024-12-06 03:35:01.254424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.203 [2024-12-06 03:35:01.254434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.204 [2024-12-06 03:35:01.254440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.204 [2024-12-06 03:35:01.254455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.204 qpair failed and we were unable to recover it.
00:26:41.204 [2024-12-06 03:35:01.264358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.204 [2024-12-06 03:35:01.264414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.204 [2024-12-06 03:35:01.264428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.204 [2024-12-06 03:35:01.264435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.204 [2024-12-06 03:35:01.264441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.204 [2024-12-06 03:35:01.264455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.204 qpair failed and we were unable to recover it.
00:26:41.204 [2024-12-06 03:35:01.274395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.204 [2024-12-06 03:35:01.274454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.204 [2024-12-06 03:35:01.274468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.204 [2024-12-06 03:35:01.274476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.204 [2024-12-06 03:35:01.274482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.204 [2024-12-06 03:35:01.274496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.204 qpair failed and we were unable to recover it.
00:26:41.204 [2024-12-06 03:35:01.284370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.204 [2024-12-06 03:35:01.284452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.204 [2024-12-06 03:35:01.284467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.204 [2024-12-06 03:35:01.284474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.204 [2024-12-06 03:35:01.284480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.204 [2024-12-06 03:35:01.284495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.204 qpair failed and we were unable to recover it.
00:26:41.204 [2024-12-06 03:35:01.294457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.204 [2024-12-06 03:35:01.294524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.204 [2024-12-06 03:35:01.294539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.204 [2024-12-06 03:35:01.294546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.204 [2024-12-06 03:35:01.294552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.204 [2024-12-06 03:35:01.294566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.204 qpair failed and we were unable to recover it.
00:26:41.204 [2024-12-06 03:35:01.304440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.204 [2024-12-06 03:35:01.304506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.204 [2024-12-06 03:35:01.304522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.204 [2024-12-06 03:35:01.304530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.204 [2024-12-06 03:35:01.304538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.204 [2024-12-06 03:35:01.304554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.204 qpair failed and we were unable to recover it.
00:26:41.204 [2024-12-06 03:35:01.314512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.204 [2024-12-06 03:35:01.314572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.204 [2024-12-06 03:35:01.314589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.204 [2024-12-06 03:35:01.314596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.204 [2024-12-06 03:35:01.314602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.204 [2024-12-06 03:35:01.314617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.204 qpair failed and we were unable to recover it.
00:26:41.204 [2024-12-06 03:35:01.324538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.204 [2024-12-06 03:35:01.324597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.204 [2024-12-06 03:35:01.324611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.204 [2024-12-06 03:35:01.324619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.204 [2024-12-06 03:35:01.324625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.204 [2024-12-06 03:35:01.324639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.204 qpair failed and we were unable to recover it.
00:26:41.204 [2024-12-06 03:35:01.334554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.204 [2024-12-06 03:35:01.334610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.204 [2024-12-06 03:35:01.334625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.204 [2024-12-06 03:35:01.334632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.204 [2024-12-06 03:35:01.334639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.204 [2024-12-06 03:35:01.334653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.204 qpair failed and we were unable to recover it.
00:26:41.464 [2024-12-06 03:35:01.344509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.464 [2024-12-06 03:35:01.344566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.464 [2024-12-06 03:35:01.344584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.464 [2024-12-06 03:35:01.344591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.464 [2024-12-06 03:35:01.344597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.464 [2024-12-06 03:35:01.344612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.464 qpair failed and we were unable to recover it.
00:26:41.464 [2024-12-06 03:35:01.354608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.464 [2024-12-06 03:35:01.354667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.464 [2024-12-06 03:35:01.354681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.464 [2024-12-06 03:35:01.354688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.464 [2024-12-06 03:35:01.354694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.464 [2024-12-06 03:35:01.354708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.464 qpair failed and we were unable to recover it.
00:26:41.464 [2024-12-06 03:35:01.364591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.464 [2024-12-06 03:35:01.364674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.464 [2024-12-06 03:35:01.364688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.464 [2024-12-06 03:35:01.364695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.464 [2024-12-06 03:35:01.364701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.464 [2024-12-06 03:35:01.364715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.464 qpair failed and we were unable to recover it.
00:26:41.464 [2024-12-06 03:35:01.374598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.464 [2024-12-06 03:35:01.374651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.464 [2024-12-06 03:35:01.374666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.464 [2024-12-06 03:35:01.374672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.464 [2024-12-06 03:35:01.374679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.464 [2024-12-06 03:35:01.374693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.464 qpair failed and we were unable to recover it.
00:26:41.464 [2024-12-06 03:35:01.384631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.464 [2024-12-06 03:35:01.384689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.464 [2024-12-06 03:35:01.384704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.464 [2024-12-06 03:35:01.384715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.464 [2024-12-06 03:35:01.384721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.464 [2024-12-06 03:35:01.384736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.464 qpair failed and we were unable to recover it.
00:26:41.464 [2024-12-06 03:35:01.394774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.464 [2024-12-06 03:35:01.394832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.464 [2024-12-06 03:35:01.394846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.464 [2024-12-06 03:35:01.394854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.464 [2024-12-06 03:35:01.394860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.464 [2024-12-06 03:35:01.394874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.464 qpair failed and we were unable to recover it.
00:26:41.464 [2024-12-06 03:35:01.404751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.464 [2024-12-06 03:35:01.404812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.464 [2024-12-06 03:35:01.404826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.404833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.404839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.404854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.414765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.414823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.414838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.414845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.414851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.414866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.424838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.424925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.424940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.424952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.424958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.424974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.434789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.434849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.434864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.434871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.434877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.434891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.444892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.444955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.444971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.444978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.444984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.444999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.454831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.454890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.454905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.454913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.454919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.454933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.464923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.464984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.464999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.465006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.465013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.465027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.474976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.465 [2024-12-06 03:35:01.475043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.465 [2024-12-06 03:35:01.475061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.465 [2024-12-06 03:35:01.475068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.465 [2024-12-06 03:35:01.475074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.465 [2024-12-06 03:35:01.475088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.465 qpair failed and we were unable to recover it. 
00:26:41.465 [2024-12-06 03:35:01.484925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.465 [2024-12-06 03:35:01.484992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.465 [2024-12-06 03:35:01.485008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.465 [2024-12-06 03:35:01.485015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.465 [2024-12-06 03:35:01.485021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.465 [2024-12-06 03:35:01.485036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.465 qpair failed and we were unable to recover it. 
00:26:41.465 [2024-12-06 03:35:01.495073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.465 [2024-12-06 03:35:01.495157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.465 [2024-12-06 03:35:01.495171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.465 [2024-12-06 03:35:01.495178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.465 [2024-12-06 03:35:01.495183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.465 [2024-12-06 03:35:01.495198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.465 qpair failed and we were unable to recover it. 
00:26:41.465 [2024-12-06 03:35:01.505056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.505115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.505129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.505136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.505142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.505158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.515116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.515172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.515186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.515196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.515203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.515218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.525089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.525188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.525204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.525211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.465 [2024-12-06 03:35:01.525217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.465 [2024-12-06 03:35:01.525232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.465 qpair failed and we were unable to recover it.
00:26:41.465 [2024-12-06 03:35:01.535069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.465 [2024-12-06 03:35:01.535128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.465 [2024-12-06 03:35:01.535142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.465 [2024-12-06 03:35:01.535149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.466 [2024-12-06 03:35:01.535155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.466 [2024-12-06 03:35:01.535169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.466 qpair failed and we were unable to recover it.
00:26:41.466 [2024-12-06 03:35:01.545148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.466 [2024-12-06 03:35:01.545211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.466 [2024-12-06 03:35:01.545225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.466 [2024-12-06 03:35:01.545232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.466 [2024-12-06 03:35:01.545239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.466 [2024-12-06 03:35:01.545253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.466 qpair failed and we were unable to recover it.
00:26:41.466 [2024-12-06 03:35:01.555201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.466 [2024-12-06 03:35:01.555267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.466 [2024-12-06 03:35:01.555282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.466 [2024-12-06 03:35:01.555289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.466 [2024-12-06 03:35:01.555295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.466 [2024-12-06 03:35:01.555309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.466 qpair failed and we were unable to recover it.
00:26:41.466 [2024-12-06 03:35:01.565153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.466 [2024-12-06 03:35:01.565213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.466 [2024-12-06 03:35:01.565228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.466 [2024-12-06 03:35:01.565234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.466 [2024-12-06 03:35:01.565240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.466 [2024-12-06 03:35:01.565255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.466 qpair failed and we were unable to recover it.
00:26:41.466 [2024-12-06 03:35:01.575168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.466 [2024-12-06 03:35:01.575224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.466 [2024-12-06 03:35:01.575238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.466 [2024-12-06 03:35:01.575246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.466 [2024-12-06 03:35:01.575251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.466 [2024-12-06 03:35:01.575266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.466 qpair failed and we were unable to recover it.
00:26:41.466 [2024-12-06 03:35:01.585200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.466 [2024-12-06 03:35:01.585255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.466 [2024-12-06 03:35:01.585269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.466 [2024-12-06 03:35:01.585276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.466 [2024-12-06 03:35:01.585282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.466 [2024-12-06 03:35:01.585296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.466 qpair failed and we were unable to recover it.
00:26:41.466 [2024-12-06 03:35:01.595355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.466 [2024-12-06 03:35:01.595413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.466 [2024-12-06 03:35:01.595427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.466 [2024-12-06 03:35:01.595435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.466 [2024-12-06 03:35:01.595441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.466 [2024-12-06 03:35:01.595454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.466 qpair failed and we were unable to recover it.
00:26:41.725 [2024-12-06 03:35:01.605346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.725 [2024-12-06 03:35:01.605406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.725 [2024-12-06 03:35:01.605420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.725 [2024-12-06 03:35:01.605427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.725 [2024-12-06 03:35:01.605433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.725 [2024-12-06 03:35:01.605448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.725 qpair failed and we were unable to recover it.
00:26:41.725 [2024-12-06 03:35:01.615294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.725 [2024-12-06 03:35:01.615350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.725 [2024-12-06 03:35:01.615365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.725 [2024-12-06 03:35:01.615371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.725 [2024-12-06 03:35:01.615377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.725 [2024-12-06 03:35:01.615392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.725 qpair failed and we were unable to recover it.
00:26:41.725 [2024-12-06 03:35:01.625354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.725 [2024-12-06 03:35:01.625408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.725 [2024-12-06 03:35:01.625422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.725 [2024-12-06 03:35:01.625429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.725 [2024-12-06 03:35:01.625435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.725 [2024-12-06 03:35:01.625450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.725 qpair failed and we were unable to recover it.
00:26:41.725 [2024-12-06 03:35:01.635358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.635416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.635429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.635436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.635442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.635457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.645388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.645464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.645478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.645489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.645495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.645509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.655471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.655529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.655543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.655550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.655556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.655570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.665432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.665486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.665500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.665507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.665514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.665528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.675475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.675531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.675546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.675553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.675559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.675573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.685550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.685607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.685622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.685629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.685636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.685650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.695528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.695585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.695600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.695607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.695614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.695628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.705557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.705608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.705621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.705628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.705634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.705649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.715659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.715720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.715734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.715741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.715747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.715761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.725673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.725729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.725743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.725750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.725756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.725771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.735698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.735754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.735769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.735776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.735782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.735796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.745739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.745796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.745812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.745819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.745826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.745840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.755790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.755857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.726 [2024-12-06 03:35:01.755873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.726 [2024-12-06 03:35:01.755880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.726 [2024-12-06 03:35:01.755886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.726 [2024-12-06 03:35:01.755901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.726 qpair failed and we were unable to recover it.
00:26:41.726 [2024-12-06 03:35:01.765804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.726 [2024-12-06 03:35:01.765861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.727 [2024-12-06 03:35:01.765876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.727 [2024-12-06 03:35:01.765883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.727 [2024-12-06 03:35:01.765889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.727 [2024-12-06 03:35:01.765903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.727 qpair failed and we were unable to recover it.
00:26:41.727 [2024-12-06 03:35:01.775860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.727 [2024-12-06 03:35:01.775919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.727 [2024-12-06 03:35:01.775933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.727 [2024-12-06 03:35:01.775943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.727 [2024-12-06 03:35:01.775953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.727 [2024-12-06 03:35:01.775967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.727 qpair failed and we were unable to recover it.
00:26:41.727 [2024-12-06 03:35:01.785861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.727 [2024-12-06 03:35:01.785929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.727 [2024-12-06 03:35:01.785944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.727 [2024-12-06 03:35:01.785956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.727 [2024-12-06 03:35:01.785963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.727 [2024-12-06 03:35:01.785978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.727 qpair failed and we were unable to recover it.
00:26:41.727 [2024-12-06 03:35:01.795930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.727 [2024-12-06 03:35:01.796035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.727 [2024-12-06 03:35:01.796050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.727 [2024-12-06 03:35:01.796057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.727 [2024-12-06 03:35:01.796063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.727 [2024-12-06 03:35:01.796078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.727 qpair failed and we were unable to recover it.
00:26:41.727 [2024-12-06 03:35:01.805921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.727 [2024-12-06 03:35:01.805995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.727 [2024-12-06 03:35:01.806010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.727 [2024-12-06 03:35:01.806017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.727 [2024-12-06 03:35:01.806023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.727 [2024-12-06 03:35:01.806038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.727 qpair failed and we were unable to recover it. 
00:26:41.727 [2024-12-06 03:35:01.815932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.727 [2024-12-06 03:35:01.815989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.727 [2024-12-06 03:35:01.816003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.727 [2024-12-06 03:35:01.816010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.727 [2024-12-06 03:35:01.816016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.727 [2024-12-06 03:35:01.816031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.727 qpair failed and we were unable to recover it. 
00:26:41.727 [2024-12-06 03:35:01.825981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.727 [2024-12-06 03:35:01.826038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.727 [2024-12-06 03:35:01.826052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.727 [2024-12-06 03:35:01.826059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.727 [2024-12-06 03:35:01.826065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.727 [2024-12-06 03:35:01.826080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.727 qpair failed and we were unable to recover it. 
00:26:41.727 [2024-12-06 03:35:01.836010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.727 [2024-12-06 03:35:01.836073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.727 [2024-12-06 03:35:01.836087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.727 [2024-12-06 03:35:01.836094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.727 [2024-12-06 03:35:01.836100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.727 [2024-12-06 03:35:01.836115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.727 qpair failed and we were unable to recover it. 
00:26:41.727 [2024-12-06 03:35:01.846030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.727 [2024-12-06 03:35:01.846089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.727 [2024-12-06 03:35:01.846103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.727 [2024-12-06 03:35:01.846110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.727 [2024-12-06 03:35:01.846116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.727 [2024-12-06 03:35:01.846130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.727 qpair failed and we were unable to recover it. 
00:26:41.727 [2024-12-06 03:35:01.856012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.727 [2024-12-06 03:35:01.856099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.727 [2024-12-06 03:35:01.856113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.727 [2024-12-06 03:35:01.856119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.727 [2024-12-06 03:35:01.856126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.727 [2024-12-06 03:35:01.856140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.727 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.866062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.866121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.866135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.866141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.866148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.866162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.876128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.876185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.876200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.876207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.876213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.876228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.886150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.886238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.886252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.886259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.886265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.886280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.896151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.896204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.896218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.896225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.896231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.896246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.906114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.906176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.906190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.906200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.906207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.906221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.916227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.916287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.916301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.916308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.916314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.916329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.926297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.926402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.926416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.926423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.926430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.926444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.936266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.936317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.936333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.936340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.936346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.936361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.946303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.946358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.946372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.946379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.946385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.946405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.986 [2024-12-06 03:35:01.956314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.986 [2024-12-06 03:35:01.956372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.986 [2024-12-06 03:35:01.956386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.986 [2024-12-06 03:35:01.956393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.986 [2024-12-06 03:35:01.956399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.986 [2024-12-06 03:35:01.956414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.986 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:01.966348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:01.966409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:01.966424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:01.966431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:01.966437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:01.966452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:01.976311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:01.976368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:01.976383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:01.976390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:01.976396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:01.976410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:01.986395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:01.986451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:01.986465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:01.986472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:01.986479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:01.986493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:01.996438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:01.996499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:01.996513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:01.996521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:01.996527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:01.996541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:02.006519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:02.006577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:02.006591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:02.006598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:02.006604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:02.006618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:02.016418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:02.016477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:02.016491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:02.016498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:02.016504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:02.016518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:02.026440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:02.026500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:02.026514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:02.026521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:02.026527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:02.026541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:02.036557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:02.036613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:02.036627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:02.036637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:02.036643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:02.036657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:02.046603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:02.046673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:02.046688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:02.046695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:02.046701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:02.046715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:02.056619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:02.056676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:02.056691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:02.056698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:02.056704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:02.056719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:02.066634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:41.987 [2024-12-06 03:35:02.066689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:41.987 [2024-12-06 03:35:02.066704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:41.987 [2024-12-06 03:35:02.066711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:41.987 [2024-12-06 03:35:02.066717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:41.987 [2024-12-06 03:35:02.066731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:41.987 qpair failed and we were unable to recover it. 
00:26:41.987 [2024-12-06 03:35:02.076675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.987 [2024-12-06 03:35:02.076750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.987 [2024-12-06 03:35:02.076764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.987 [2024-12-06 03:35:02.076771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.987 [2024-12-06 03:35:02.076777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.987 [2024-12-06 03:35:02.076794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.987 qpair failed and we were unable to recover it.
00:26:41.987 [2024-12-06 03:35:02.086698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.987 [2024-12-06 03:35:02.086749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.987 [2024-12-06 03:35:02.086763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.987 [2024-12-06 03:35:02.086770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.987 [2024-12-06 03:35:02.086776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.987 [2024-12-06 03:35:02.086791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.987 qpair failed and we were unable to recover it.
00:26:41.987 [2024-12-06 03:35:02.096642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.988 [2024-12-06 03:35:02.096717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.988 [2024-12-06 03:35:02.096731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.988 [2024-12-06 03:35:02.096738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.988 [2024-12-06 03:35:02.096744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.988 [2024-12-06 03:35:02.096759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.988 qpair failed and we were unable to recover it.
00:26:41.988 [2024-12-06 03:35:02.106742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.988 [2024-12-06 03:35:02.106799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.988 [2024-12-06 03:35:02.106813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.988 [2024-12-06 03:35:02.106821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.988 [2024-12-06 03:35:02.106826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.988 [2024-12-06 03:35:02.106841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.988 qpair failed and we were unable to recover it.
00:26:41.988 [2024-12-06 03:35:02.116755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:41.988 [2024-12-06 03:35:02.116816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:41.988 [2024-12-06 03:35:02.116830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:41.988 [2024-12-06 03:35:02.116837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:41.988 [2024-12-06 03:35:02.116843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:41.988 [2024-12-06 03:35:02.116857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:41.988 qpair failed and we were unable to recover it.
00:26:42.247 [2024-12-06 03:35:02.126819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.247 [2024-12-06 03:35:02.126879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.247 [2024-12-06 03:35:02.126894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.247 [2024-12-06 03:35:02.126901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.247 [2024-12-06 03:35:02.126907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.247 [2024-12-06 03:35:02.126921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.247 qpair failed and we were unable to recover it.
00:26:42.247 [2024-12-06 03:35:02.136850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.247 [2024-12-06 03:35:02.136907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.247 [2024-12-06 03:35:02.136922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.247 [2024-12-06 03:35:02.136929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.247 [2024-12-06 03:35:02.136935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.247 [2024-12-06 03:35:02.136954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.247 qpair failed and we were unable to recover it.
00:26:42.247 [2024-12-06 03:35:02.146865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.146923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.146939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.146950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.146957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.146973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.156995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.157062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.157077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.157084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.157090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.157105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.166923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.166985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.167000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.167010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.167016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.167031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.176972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.177027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.177041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.177048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.177054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.177068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.186981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.187036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.187050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.187057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.187063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.187078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.197027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.197086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.197101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.197108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.197114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.197129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.207047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.207104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.207118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.207126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.207132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.207150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.217083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.217142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.217157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.217164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.217170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.217185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.227083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.227140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.227154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.227162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.227169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.227183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.237108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.237170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.237184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.237191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.237197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.237211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.247156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.247231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.247248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.247256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.247263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.247278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.257181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.257240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.257255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.257262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.257268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.257283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.267212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.248 [2024-12-06 03:35:02.267265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.248 [2024-12-06 03:35:02.267280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.248 [2024-12-06 03:35:02.267287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.248 [2024-12-06 03:35:02.267293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.248 [2024-12-06 03:35:02.267307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.248 qpair failed and we were unable to recover it.
00:26:42.248 [2024-12-06 03:35:02.277257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.277331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.277346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.277353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.277359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.277373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.287286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.287345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.287359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.287366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.287372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.287386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.297296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.297354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.297368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.297378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.297384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.297398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.307318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.307385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.307400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.307407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.307413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.307428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.317415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.317522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.317537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.317544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.317551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.317566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.327417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.327476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.327490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.327497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.327503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.327518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.337402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.337459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.337473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.337480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.337486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.337504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.347434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.347496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.347510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.347518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.347524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.347538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.357530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.357606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.357622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.357629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.357635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.357650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.367439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.367495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.367509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.367516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.367522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.367537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.249 [2024-12-06 03:35:02.377550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.249 [2024-12-06 03:35:02.377608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.249 [2024-12-06 03:35:02.377623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.249 [2024-12-06 03:35:02.377630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.249 [2024-12-06 03:35:02.377636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.249 [2024-12-06 03:35:02.377651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.249 qpair failed and we were unable to recover it.
00:26:42.508 [2024-12-06 03:35:02.387576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.508 [2024-12-06 03:35:02.387636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.508 [2024-12-06 03:35:02.387651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.508 [2024-12-06 03:35:02.387658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.508 [2024-12-06 03:35:02.387664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.508 [2024-12-06 03:35:02.387679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.508 qpair failed and we were unable to recover it.
00:26:42.508 [2024-12-06 03:35:02.397579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.508 [2024-12-06 03:35:02.397639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.508 [2024-12-06 03:35:02.397654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.508 [2024-12-06 03:35:02.397660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.508 [2024-12-06 03:35:02.397667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.508 [2024-12-06 03:35:02.397681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.508 qpair failed and we were unable to recover it.
00:26:42.508 [2024-12-06 03:35:02.407603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.508 [2024-12-06 03:35:02.407661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.508 [2024-12-06 03:35:02.407675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.508 [2024-12-06 03:35:02.407682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.508 [2024-12-06 03:35:02.407689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.508 [2024-12-06 03:35:02.407703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.508 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.417618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.417675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.417689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.417697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.417703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.417717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.427673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.509 [2024-12-06 03:35:02.427728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.509 [2024-12-06 03:35:02.427743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.509 [2024-12-06 03:35:02.427753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.509 [2024-12-06 03:35:02.427759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.509 [2024-12-06 03:35:02.427774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.509 qpair failed and we were unable to recover it. 
00:26:42.509 [2024-12-06 03:35:02.437691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.437793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.437808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.437816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.437822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.437836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.447744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.447829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.447844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.447851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.447857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.447871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.457772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.457826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.457840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.457847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.457853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.457867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.467769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.467827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.467842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.467849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.467855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.467872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.477833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.477912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.477926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.477933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.477939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.477956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.487831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.487889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.487904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.487912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.487917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.487933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.497844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.497900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.497915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.497922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.497928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.497942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.507875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.507934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.507953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.507960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.507966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.507981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.517913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.517981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.517996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.518003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.518009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.518024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.527960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.528020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.528034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.528042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.528048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.528063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.537991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.509 [2024-12-06 03:35:02.538048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.509 [2024-12-06 03:35:02.538063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.509 [2024-12-06 03:35:02.538070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.509 [2024-12-06 03:35:02.538076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.509 [2024-12-06 03:35:02.538091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.509 qpair failed and we were unable to recover it.
00:26:42.509 [2024-12-06 03:35:02.547992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.548101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.548115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.548123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.548129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.548145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.510 [2024-12-06 03:35:02.558077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.558177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.558191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.558201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.558208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.558223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.510 [2024-12-06 03:35:02.568059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.568118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.568134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.568142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.568148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.568163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.510 [2024-12-06 03:35:02.578078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.578136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.578150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.578158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.578165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.578180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.510 [2024-12-06 03:35:02.588181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.588242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.588257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.588264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.588270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.588284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.510 [2024-12-06 03:35:02.598151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.598209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.598224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.598231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.598237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.598255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.510 [2024-12-06 03:35:02.608178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.608232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.608247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.608254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.608260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.608274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.510 [2024-12-06 03:35:02.618214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.618294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.618309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.618316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.618322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.618336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.510 [2024-12-06 03:35:02.628236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.628315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.628330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.628337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.628343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.628358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.510 [2024-12-06 03:35:02.638263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.510 [2024-12-06 03:35:02.638321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.510 [2024-12-06 03:35:02.638336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.510 [2024-12-06 03:35:02.638343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.510 [2024-12-06 03:35:02.638349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.510 [2024-12-06 03:35:02.638364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.510 qpair failed and we were unable to recover it.
00:26:42.770 [2024-12-06 03:35:02.648302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.770 [2024-12-06 03:35:02.648366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.770 [2024-12-06 03:35:02.648381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.770 [2024-12-06 03:35:02.648388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.770 [2024-12-06 03:35:02.648394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.770 [2024-12-06 03:35:02.648409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.770 qpair failed and we were unable to recover it.
00:26:42.770 [2024-12-06 03:35:02.658337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.770 [2024-12-06 03:35:02.658389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.770 [2024-12-06 03:35:02.658404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.770 [2024-12-06 03:35:02.658411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.770 [2024-12-06 03:35:02.658417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.771 [2024-12-06 03:35:02.658432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.771 qpair failed and we were unable to recover it.
00:26:42.771 [2024-12-06 03:35:02.668366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.771 [2024-12-06 03:35:02.668444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.771 [2024-12-06 03:35:02.668458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.771 [2024-12-06 03:35:02.668465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.771 [2024-12-06 03:35:02.668471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.771 [2024-12-06 03:35:02.668486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.771 qpair failed and we were unable to recover it.
00:26:42.771 [2024-12-06 03:35:02.678383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.771 [2024-12-06 03:35:02.678442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.771 [2024-12-06 03:35:02.678456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.771 [2024-12-06 03:35:02.678463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.771 [2024-12-06 03:35:02.678469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.771 [2024-12-06 03:35:02.678483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.771 qpair failed and we were unable to recover it.
00:26:42.771 [2024-12-06 03:35:02.688407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.771 [2024-12-06 03:35:02.688467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.771 [2024-12-06 03:35:02.688482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.771 [2024-12-06 03:35:02.688492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.771 [2024-12-06 03:35:02.688498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.771 [2024-12-06 03:35:02.688512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.771 qpair failed and we were unable to recover it.
00:26:42.771 [2024-12-06 03:35:02.698442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.771 [2024-12-06 03:35:02.698501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.771 [2024-12-06 03:35:02.698516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.771 [2024-12-06 03:35:02.698523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.771 [2024-12-06 03:35:02.698529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.771 [2024-12-06 03:35:02.698543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.771 qpair failed and we were unable to recover it.
00:26:42.771 [2024-12-06 03:35:02.708499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.771 [2024-12-06 03:35:02.708578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.771 [2024-12-06 03:35:02.708594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.771 [2024-12-06 03:35:02.708601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.771 [2024-12-06 03:35:02.708607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.771 [2024-12-06 03:35:02.708622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.771 qpair failed and we were unable to recover it.
00:26:42.771 [2024-12-06 03:35:02.718500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.771 [2024-12-06 03:35:02.718558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.771 [2024-12-06 03:35:02.718573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.771 [2024-12-06 03:35:02.718580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.771 [2024-12-06 03:35:02.718586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.771 [2024-12-06 03:35:02.718600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.771 qpair failed and we were unable to recover it.
00:26:42.771 [2024-12-06 03:35:02.728558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:42.771 [2024-12-06 03:35:02.728627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:42.771 [2024-12-06 03:35:02.728642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:42.771 [2024-12-06 03:35:02.728648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:42.771 [2024-12-06 03:35:02.728654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:42.771 [2024-12-06 03:35:02.728672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:42.771 qpair failed and we were unable to recover it.
00:26:42.771 [2024-12-06 03:35:02.738579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.738644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.738659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.738665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.738672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.738686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.748588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.748645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.748659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.748666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.748672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.748687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.758604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.758660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.758674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.758681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.758687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.758701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.768654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.768739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.768754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.768761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.768768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.768784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.778708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.778768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.778783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.778790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.778797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.778811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.788694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.788749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.788764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.788771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.788777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.788793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.798761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.798866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.798880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.798887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.798894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.798909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.808783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.808843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.808857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.808864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.808871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.808885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.818774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.818831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.818846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.818856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.818862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.818877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.828823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.828879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.828895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.828902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.828909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.828924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.838840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.838900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.838914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.838921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.838928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.771 [2024-12-06 03:35:02.838942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.771 qpair failed and we were unable to recover it. 
00:26:42.771 [2024-12-06 03:35:02.848874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.771 [2024-12-06 03:35:02.848930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.771 [2024-12-06 03:35:02.848945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.771 [2024-12-06 03:35:02.848958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.771 [2024-12-06 03:35:02.848964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.772 [2024-12-06 03:35:02.848979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.772 qpair failed and we were unable to recover it. 
00:26:42.772 [2024-12-06 03:35:02.858886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.772 [2024-12-06 03:35:02.858944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.772 [2024-12-06 03:35:02.858963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.772 [2024-12-06 03:35:02.858970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.772 [2024-12-06 03:35:02.858976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.772 [2024-12-06 03:35:02.858994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.772 qpair failed and we were unable to recover it. 
00:26:42.772 [2024-12-06 03:35:02.868915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.772 [2024-12-06 03:35:02.868990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.772 [2024-12-06 03:35:02.869005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.772 [2024-12-06 03:35:02.869012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.772 [2024-12-06 03:35:02.869018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.772 [2024-12-06 03:35:02.869033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.772 qpair failed and we were unable to recover it. 
00:26:42.772 [2024-12-06 03:35:02.878992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.772 [2024-12-06 03:35:02.879052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.772 [2024-12-06 03:35:02.879066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.772 [2024-12-06 03:35:02.879073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.772 [2024-12-06 03:35:02.879080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.772 [2024-12-06 03:35:02.879094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.772 qpair failed and we were unable to recover it. 
00:26:42.772 [2024-12-06 03:35:02.888985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.772 [2024-12-06 03:35:02.889041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.772 [2024-12-06 03:35:02.889056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.772 [2024-12-06 03:35:02.889063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.772 [2024-12-06 03:35:02.889069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.772 [2024-12-06 03:35:02.889084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.772 qpair failed and we were unable to recover it. 
00:26:42.772 [2024-12-06 03:35:02.899003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:42.772 [2024-12-06 03:35:02.899066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:42.772 [2024-12-06 03:35:02.899081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:42.772 [2024-12-06 03:35:02.899088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:42.772 [2024-12-06 03:35:02.899094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:42.772 [2024-12-06 03:35:02.899108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.772 qpair failed and we were unable to recover it. 
00:26:43.032 [2024-12-06 03:35:02.909045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.032 [2024-12-06 03:35:02.909136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.032 [2024-12-06 03:35:02.909150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.032 [2024-12-06 03:35:02.909157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.032 [2024-12-06 03:35:02.909163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.032 [2024-12-06 03:35:02.909178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.032 qpair failed and we were unable to recover it. 
00:26:43.032 [2024-12-06 03:35:02.919110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.032 [2024-12-06 03:35:02.919165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.032 [2024-12-06 03:35:02.919180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.032 [2024-12-06 03:35:02.919186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.032 [2024-12-06 03:35:02.919193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.032 [2024-12-06 03:35:02.919207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.032 qpair failed and we were unable to recover it. 
00:26:43.032 [2024-12-06 03:35:02.929086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.032 [2024-12-06 03:35:02.929140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.032 [2024-12-06 03:35:02.929154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.032 [2024-12-06 03:35:02.929160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.032 [2024-12-06 03:35:02.929166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.032 [2024-12-06 03:35:02.929181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.032 qpair failed and we were unable to recover it. 
00:26:43.032 [2024-12-06 03:35:02.939120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.032 [2024-12-06 03:35:02.939178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.032 [2024-12-06 03:35:02.939193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.032 [2024-12-06 03:35:02.939200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.032 [2024-12-06 03:35:02.939206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.032 [2024-12-06 03:35:02.939220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.032 qpair failed and we were unable to recover it. 
00:26:43.032 [2024-12-06 03:35:02.949154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.032 [2024-12-06 03:35:02.949216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.032 [2024-12-06 03:35:02.949230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.032 [2024-12-06 03:35:02.949240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.032 [2024-12-06 03:35:02.949246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.032 [2024-12-06 03:35:02.949261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.032 qpair failed and we were unable to recover it. 
00:26:43.032 [2024-12-06 03:35:02.959269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.032 [2024-12-06 03:35:02.959352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.032 [2024-12-06 03:35:02.959366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.033 [2024-12-06 03:35:02.959373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.033 [2024-12-06 03:35:02.959380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.033 [2024-12-06 03:35:02.959394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.033 qpair failed and we were unable to recover it. 
00:26:43.033 [2024-12-06 03:35:02.969197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.033 [2024-12-06 03:35:02.969259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.033 [2024-12-06 03:35:02.969274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.033 [2024-12-06 03:35:02.969281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.033 [2024-12-06 03:35:02.969288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.033 [2024-12-06 03:35:02.969302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.033 qpair failed and we were unable to recover it. 
00:26:43.033 [2024-12-06 03:35:02.979265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.033 [2024-12-06 03:35:02.979322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.033 [2024-12-06 03:35:02.979338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.033 [2024-12-06 03:35:02.979345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.033 [2024-12-06 03:35:02.979351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.033 [2024-12-06 03:35:02.979366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.033 qpair failed and we were unable to recover it. 
00:26:43.033 [2024-12-06 03:35:02.989305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.033 [2024-12-06 03:35:02.989362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.033 [2024-12-06 03:35:02.989377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.033 [2024-12-06 03:35:02.989384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.033 [2024-12-06 03:35:02.989391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.033 [2024-12-06 03:35:02.989408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.033 qpair failed and we were unable to recover it. 
00:26:43.033 [2024-12-06 03:35:02.999271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.033 [2024-12-06 03:35:02.999327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.033 [2024-12-06 03:35:02.999341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.033 [2024-12-06 03:35:02.999348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.033 [2024-12-06 03:35:02.999354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.033 [2024-12-06 03:35:02.999369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.033 qpair failed and we were unable to recover it. 
00:26:43.033 [2024-12-06 03:35:03.009303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.033 [2024-12-06 03:35:03.009363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.033 [2024-12-06 03:35:03.009377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.033 [2024-12-06 03:35:03.009384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.033 [2024-12-06 03:35:03.009390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.033 [2024-12-06 03:35:03.009405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.033 qpair failed and we were unable to recover it. 
00:26:43.033 [2024-12-06 03:35:03.019341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.033 [2024-12-06 03:35:03.019398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.033 [2024-12-06 03:35:03.019413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.033 [2024-12-06 03:35:03.019419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.033 [2024-12-06 03:35:03.019426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.033 [2024-12-06 03:35:03.019440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.033 qpair failed and we were unable to recover it. 
00:26:43.033 [2024-12-06 03:35:03.029324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.033 [2024-12-06 03:35:03.029376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.033 [2024-12-06 03:35:03.029390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.033 [2024-12-06 03:35:03.029397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.033 [2024-12-06 03:35:03.029404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.033 [2024-12-06 03:35:03.029417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.033 qpair failed and we were unable to recover it.
00:26:43.033 [2024-12-06 03:35:03.039443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.033 [2024-12-06 03:35:03.039503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.033 [2024-12-06 03:35:03.039517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.033 [2024-12-06 03:35:03.039524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.033 [2024-12-06 03:35:03.039530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.033 [2024-12-06 03:35:03.039545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.033 qpair failed and we were unable to recover it.
00:26:43.033 [2024-12-06 03:35:03.049439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.033 [2024-12-06 03:35:03.049493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.033 [2024-12-06 03:35:03.049507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.033 [2024-12-06 03:35:03.049514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.033 [2024-12-06 03:35:03.049520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.033 [2024-12-06 03:35:03.049535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.033 qpair failed and we were unable to recover it.
00:26:43.033 [2024-12-06 03:35:03.059448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.033 [2024-12-06 03:35:03.059524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.033 [2024-12-06 03:35:03.059538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.033 [2024-12-06 03:35:03.059545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.033 [2024-12-06 03:35:03.059551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.033 [2024-12-06 03:35:03.059566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.033 qpair failed and we were unable to recover it.
00:26:43.033 [2024-12-06 03:35:03.069526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.069581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.069595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.069602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.069608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.069622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.034 [2024-12-06 03:35:03.079555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.079613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.079630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.079638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.079644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.079658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.034 [2024-12-06 03:35:03.089492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.089551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.089565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.089572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.089578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.089592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.034 [2024-12-06 03:35:03.099561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.099628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.099643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.099650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.099656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.099670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.034 [2024-12-06 03:35:03.109571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.109629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.109643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.109650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.109656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.109671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.034 [2024-12-06 03:35:03.119585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.119642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.119657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.119664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.119670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.119688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.034 [2024-12-06 03:35:03.129709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.129776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.129791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.129797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.129803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.129818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.034 [2024-12-06 03:35:03.139787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.139872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.139887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.139893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.139899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.139914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.034 [2024-12-06 03:35:03.149734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.149790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.149805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.149812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.149818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.149832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.034 [2024-12-06 03:35:03.159818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.034 [2024-12-06 03:35:03.159922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.034 [2024-12-06 03:35:03.159937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.034 [2024-12-06 03:35:03.159944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.034 [2024-12-06 03:35:03.159953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.034 [2024-12-06 03:35:03.159969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.034 qpair failed and we were unable to recover it.
00:26:43.295 [2024-12-06 03:35:03.169800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.295 [2024-12-06 03:35:03.169874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.295 [2024-12-06 03:35:03.169889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.295 [2024-12-06 03:35:03.169896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.295 [2024-12-06 03:35:03.169902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.295 [2024-12-06 03:35:03.169916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.295 qpair failed and we were unable to recover it.
00:26:43.295 [2024-12-06 03:35:03.179897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.295 [2024-12-06 03:35:03.179958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.295 [2024-12-06 03:35:03.179973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.295 [2024-12-06 03:35:03.179980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.295 [2024-12-06 03:35:03.179987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.295 [2024-12-06 03:35:03.180002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.295 qpair failed and we were unable to recover it.
00:26:43.295 [2024-12-06 03:35:03.189844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.295 [2024-12-06 03:35:03.189920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.295 [2024-12-06 03:35:03.189936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.295 [2024-12-06 03:35:03.189944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.295 [2024-12-06 03:35:03.189954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.295 [2024-12-06 03:35:03.189970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.295 qpair failed and we were unable to recover it.
00:26:43.295 [2024-12-06 03:35:03.199892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.295 [2024-12-06 03:35:03.199956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.295 [2024-12-06 03:35:03.199970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.295 [2024-12-06 03:35:03.199977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.295 [2024-12-06 03:35:03.199984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.295 [2024-12-06 03:35:03.199999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.295 qpair failed and we were unable to recover it.
00:26:43.295 [2024-12-06 03:35:03.209912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.295 [2024-12-06 03:35:03.209973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.295 [2024-12-06 03:35:03.209991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.295 [2024-12-06 03:35:03.209998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.295 [2024-12-06 03:35:03.210004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.295 [2024-12-06 03:35:03.210019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.295 qpair failed and we were unable to recover it.
00:26:43.295 [2024-12-06 03:35:03.219930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.295 [2024-12-06 03:35:03.219998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.295 [2024-12-06 03:35:03.220012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.295 [2024-12-06 03:35:03.220020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.295 [2024-12-06 03:35:03.220026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.295 [2024-12-06 03:35:03.220041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.295 qpair failed and we were unable to recover it.
00:26:43.295 [2024-12-06 03:35:03.229978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.295 [2024-12-06 03:35:03.230033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.295 [2024-12-06 03:35:03.230047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.295 [2024-12-06 03:35:03.230054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.295 [2024-12-06 03:35:03.230060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.295 [2024-12-06 03:35:03.230075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.295 qpair failed and we were unable to recover it.
00:26:43.295 [2024-12-06 03:35:03.239981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.295 [2024-12-06 03:35:03.240040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.295 [2024-12-06 03:35:03.240057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.295 [2024-12-06 03:35:03.240065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.295 [2024-12-06 03:35:03.240071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.295 [2024-12-06 03:35:03.240086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.295 qpair failed and we were unable to recover it.
00:26:43.295 [2024-12-06 03:35:03.250013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.295 [2024-12-06 03:35:03.250074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.295 [2024-12-06 03:35:03.250089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.295 [2024-12-06 03:35:03.250097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.295 [2024-12-06 03:35:03.250103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.296 [2024-12-06 03:35:03.250121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.296 qpair failed and we were unable to recover it.
00:26:43.296 [2024-12-06 03:35:03.260073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.296 [2024-12-06 03:35:03.260129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.296 [2024-12-06 03:35:03.260144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.296 [2024-12-06 03:35:03.260151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.296 [2024-12-06 03:35:03.260158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.296 [2024-12-06 03:35:03.260172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.296 qpair failed and we were unable to recover it.
00:26:43.296 [2024-12-06 03:35:03.270105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.296 [2024-12-06 03:35:03.270186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.296 [2024-12-06 03:35:03.270200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.296 [2024-12-06 03:35:03.270207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.296 [2024-12-06 03:35:03.270213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.296 [2024-12-06 03:35:03.270228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.296 qpair failed and we were unable to recover it.
00:26:43.296 [2024-12-06 03:35:03.280152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.296 [2024-12-06 03:35:03.280214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.296 [2024-12-06 03:35:03.280228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.296 [2024-12-06 03:35:03.280236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.296 [2024-12-06 03:35:03.280242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.296 [2024-12-06 03:35:03.280256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.296 qpair failed and we were unable to recover it.
00:26:43.296 [2024-12-06 03:35:03.290184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.296 [2024-12-06 03:35:03.290243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.296 [2024-12-06 03:35:03.290257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.296 [2024-12-06 03:35:03.290264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.296 [2024-12-06 03:35:03.290270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.296 [2024-12-06 03:35:03.290285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.296 qpair failed and we were unable to recover it.
00:26:43.296 [2024-12-06 03:35:03.300145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.296 [2024-12-06 03:35:03.300197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.296 [2024-12-06 03:35:03.300212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.296 [2024-12-06 03:35:03.300219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.296 [2024-12-06 03:35:03.300225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.296 [2024-12-06 03:35:03.300240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.296 qpair failed and we were unable to recover it.
00:26:43.296 [2024-12-06 03:35:03.310188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.296 [2024-12-06 03:35:03.310277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.296 [2024-12-06 03:35:03.310291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.296 [2024-12-06 03:35:03.310297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.296 [2024-12-06 03:35:03.310303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.296 [2024-12-06 03:35:03.310318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.296 qpair failed and we were unable to recover it.
00:26:43.296 [2024-12-06 03:35:03.320233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.296 [2024-12-06 03:35:03.320292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.296 [2024-12-06 03:35:03.320307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.296 [2024-12-06 03:35:03.320314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.296 [2024-12-06 03:35:03.320320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.296 [2024-12-06 03:35:03.320335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.296 qpair failed and we were unable to recover it.
00:26:43.296 [2024-12-06 03:35:03.330266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.296 [2024-12-06 03:35:03.330324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.296 [2024-12-06 03:35:03.330339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.296 [2024-12-06 03:35:03.330346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.296 [2024-12-06 03:35:03.330353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.296 [2024-12-06 03:35:03.330367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.296 qpair failed and we were unable to recover it.
00:26:43.296 [2024-12-06 03:35:03.340283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.296 [2024-12-06 03:35:03.340336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.296 [2024-12-06 03:35:03.340354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.296 [2024-12-06 03:35:03.340361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.296 [2024-12-06 03:35:03.340367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.296 [2024-12-06 03:35:03.340381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-12-06 03:35:03.350316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.296 [2024-12-06 03:35:03.350373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.296 [2024-12-06 03:35:03.350387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.296 [2024-12-06 03:35:03.350394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.296 [2024-12-06 03:35:03.350400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.296 [2024-12-06 03:35:03.350415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-12-06 03:35:03.360353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.296 [2024-12-06 03:35:03.360411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.296 [2024-12-06 03:35:03.360426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.296 [2024-12-06 03:35:03.360433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.296 [2024-12-06 03:35:03.360439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.296 [2024-12-06 03:35:03.360453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-12-06 03:35:03.370352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.296 [2024-12-06 03:35:03.370408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.296 [2024-12-06 03:35:03.370422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.296 [2024-12-06 03:35:03.370429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.296 [2024-12-06 03:35:03.370435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.296 [2024-12-06 03:35:03.370449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-12-06 03:35:03.380393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.296 [2024-12-06 03:35:03.380476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.296 [2024-12-06 03:35:03.380490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.296 [2024-12-06 03:35:03.380497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.296 [2024-12-06 03:35:03.380503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.296 [2024-12-06 03:35:03.380526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-12-06 03:35:03.390408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.297 [2024-12-06 03:35:03.390466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.297 [2024-12-06 03:35:03.390481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.297 [2024-12-06 03:35:03.390488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.297 [2024-12-06 03:35:03.390494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.297 [2024-12-06 03:35:03.390508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-12-06 03:35:03.400449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.297 [2024-12-06 03:35:03.400544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.297 [2024-12-06 03:35:03.400559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.297 [2024-12-06 03:35:03.400566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.297 [2024-12-06 03:35:03.400572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.297 [2024-12-06 03:35:03.400586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-12-06 03:35:03.410473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.297 [2024-12-06 03:35:03.410528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.297 [2024-12-06 03:35:03.410543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.297 [2024-12-06 03:35:03.410550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.297 [2024-12-06 03:35:03.410556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.297 [2024-12-06 03:35:03.410570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-12-06 03:35:03.420495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.297 [2024-12-06 03:35:03.420552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.297 [2024-12-06 03:35:03.420566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.297 [2024-12-06 03:35:03.420573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.297 [2024-12-06 03:35:03.420579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.297 [2024-12-06 03:35:03.420593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-12-06 03:35:03.430506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.297 [2024-12-06 03:35:03.430614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.297 [2024-12-06 03:35:03.430628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.297 [2024-12-06 03:35:03.430636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.297 [2024-12-06 03:35:03.430642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.297 [2024-12-06 03:35:03.430657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-12-06 03:35:03.440562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.558 [2024-12-06 03:35:03.440620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.558 [2024-12-06 03:35:03.440635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.558 [2024-12-06 03:35:03.440641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.558 [2024-12-06 03:35:03.440647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.558 [2024-12-06 03:35:03.440662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-12-06 03:35:03.450591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.558 [2024-12-06 03:35:03.450649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.558 [2024-12-06 03:35:03.450663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.558 [2024-12-06 03:35:03.450671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.558 [2024-12-06 03:35:03.450677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.558 [2024-12-06 03:35:03.450692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-12-06 03:35:03.460611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.558 [2024-12-06 03:35:03.460705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.558 [2024-12-06 03:35:03.460719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.558 [2024-12-06 03:35:03.460726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.558 [2024-12-06 03:35:03.460732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.558 [2024-12-06 03:35:03.460747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-12-06 03:35:03.470629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.558 [2024-12-06 03:35:03.470685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.558 [2024-12-06 03:35:03.470702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.558 [2024-12-06 03:35:03.470709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.558 [2024-12-06 03:35:03.470715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.558 [2024-12-06 03:35:03.470729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.480711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.480774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.480788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.480795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.480801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.480815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.490701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.490762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.490776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.490783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.490789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.490803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.500727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.500778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.500793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.500800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.500806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.500820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.510757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.510810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.510825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.510831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.510840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.510855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.520770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.520830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.520844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.520851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.520857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.520871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.530821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.530882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.530896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.530903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.530909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.530923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.540768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.540825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.540840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.540847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.540853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.540867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.550882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.550939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.550957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.550964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.550970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.550985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.560891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.560954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.560969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.560976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.560982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.560996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.570932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.571109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.571125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.571133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.571139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.571154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.580928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.580983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.580997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.581004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.581010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.581025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.590970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.591023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.591038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.591045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.591051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.591065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.601021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.559 [2024-12-06 03:35:03.601079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.559 [2024-12-06 03:35:03.601098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.559 [2024-12-06 03:35:03.601105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.559 [2024-12-06 03:35:03.601111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.559 [2024-12-06 03:35:03.601127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-12-06 03:35:03.611047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.560 [2024-12-06 03:35:03.611108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.560 [2024-12-06 03:35:03.611123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.560 [2024-12-06 03:35:03.611130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.560 [2024-12-06 03:35:03.611136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.560 [2024-12-06 03:35:03.611150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.560 qpair failed and we were unable to recover it. 
00:26:43.560 [2024-12-06 03:35:03.621051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:43.560 [2024-12-06 03:35:03.621116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:43.560 [2024-12-06 03:35:03.621131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:43.560 [2024-12-06 03:35:03.621138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:43.560 [2024-12-06 03:35:03.621144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:43.560 [2024-12-06 03:35:03.621158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.560 qpair failed and we were unable to recover it. 
00:26:43.560 [2024-12-06 03:35:03.631024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.560 [2024-12-06 03:35:03.631080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.560 [2024-12-06 03:35:03.631095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.560 [2024-12-06 03:35:03.631102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.560 [2024-12-06 03:35:03.631108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.560 [2024-12-06 03:35:03.631123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.560 qpair failed and we were unable to recover it.
00:26:43.560 [2024-12-06 03:35:03.641216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.560 [2024-12-06 03:35:03.641299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.560 [2024-12-06 03:35:03.641314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.560 [2024-12-06 03:35:03.641320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.560 [2024-12-06 03:35:03.641331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.560 [2024-12-06 03:35:03.641345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.560 qpair failed and we were unable to recover it.
00:26:43.560 [2024-12-06 03:35:03.651167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.560 [2024-12-06 03:35:03.651240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.560 [2024-12-06 03:35:03.651255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.560 [2024-12-06 03:35:03.651262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.560 [2024-12-06 03:35:03.651268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.560 [2024-12-06 03:35:03.651282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.560 qpair failed and we were unable to recover it.
00:26:43.560 [2024-12-06 03:35:03.661213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.560 [2024-12-06 03:35:03.661271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.560 [2024-12-06 03:35:03.661285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.560 [2024-12-06 03:35:03.661292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.560 [2024-12-06 03:35:03.661298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.560 [2024-12-06 03:35:03.661312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.560 qpair failed and we were unable to recover it.
00:26:43.560 [2024-12-06 03:35:03.671270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.560 [2024-12-06 03:35:03.671330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.560 [2024-12-06 03:35:03.671344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.560 [2024-12-06 03:35:03.671351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.560 [2024-12-06 03:35:03.671358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.560 [2024-12-06 03:35:03.671372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.560 qpair failed and we were unable to recover it.
00:26:43.560 [2024-12-06 03:35:03.681239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.560 [2024-12-06 03:35:03.681297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.560 [2024-12-06 03:35:03.681311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.560 [2024-12-06 03:35:03.681319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.560 [2024-12-06 03:35:03.681325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.560 [2024-12-06 03:35:03.681340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.560 qpair failed and we were unable to recover it.
00:26:43.560 [2024-12-06 03:35:03.691293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.560 [2024-12-06 03:35:03.691354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.560 [2024-12-06 03:35:03.691369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.560 [2024-12-06 03:35:03.691376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.560 [2024-12-06 03:35:03.691382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.560 [2024-12-06 03:35:03.691398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.560 qpair failed and we were unable to recover it.
00:26:43.821 [2024-12-06 03:35:03.701284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.821 [2024-12-06 03:35:03.701340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.821 [2024-12-06 03:35:03.701354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.821 [2024-12-06 03:35:03.701361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.821 [2024-12-06 03:35:03.701367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.821 [2024-12-06 03:35:03.701381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.821 qpair failed and we were unable to recover it.
00:26:43.821 [2024-12-06 03:35:03.711311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.821 [2024-12-06 03:35:03.711387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.821 [2024-12-06 03:35:03.711401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.821 [2024-12-06 03:35:03.711408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.711414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.711428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.721350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.721408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.721423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.721429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.721436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.721450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.731390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.731444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.731463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.731470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.731476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.731491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.741403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.741457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.741471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.741479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.741485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.741499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.751455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.751520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.751534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.751540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.751546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.751561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.761462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.761521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.761535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.761542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.761548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.761562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.771497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.771583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.771597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.771604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.771613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.771627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.781525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.781582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.781596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.781603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.781609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.781624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.791537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.791588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.791602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.791609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.791615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.791629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.801566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.801624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.801639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.801646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.801652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.801667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.811599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.811688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.811703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.811710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.811716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.811730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.821615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.821671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.821686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.821693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.821699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.821713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.831634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.831694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.831710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.831717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.831723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.831738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.841652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.841709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.841724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.841731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.841738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.841752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.851706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.851764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.851778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.851785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.851791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.851805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.861738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.861818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.861835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.861842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.861848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.861863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.871792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.871855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.871869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.871876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.871882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.871896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.881793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.881849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.881863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.881870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.881876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.822 [2024-12-06 03:35:03.881890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.822 qpair failed and we were unable to recover it.
00:26:43.822 [2024-12-06 03:35:03.891809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.822 [2024-12-06 03:35:03.891865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.822 [2024-12-06 03:35:03.891880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.822 [2024-12-06 03:35:03.891887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.822 [2024-12-06 03:35:03.891893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.823 [2024-12-06 03:35:03.891908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.823 qpair failed and we were unable to recover it.
00:26:43.823 [2024-12-06 03:35:03.901835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.823 [2024-12-06 03:35:03.901892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.823 [2024-12-06 03:35:03.901906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.823 [2024-12-06 03:35:03.901914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.823 [2024-12-06 03:35:03.901923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.823 [2024-12-06 03:35:03.901937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.823 qpair failed and we were unable to recover it.
00:26:43.823 [2024-12-06 03:35:03.911904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.823 [2024-12-06 03:35:03.911961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.823 [2024-12-06 03:35:03.911976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.823 [2024-12-06 03:35:03.911983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.823 [2024-12-06 03:35:03.911989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.823 [2024-12-06 03:35:03.912004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.823 qpair failed and we were unable to recover it.
00:26:43.823 [2024-12-06 03:35:03.921912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.823 [2024-12-06 03:35:03.922001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.823 [2024-12-06 03:35:03.922015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.823 [2024-12-06 03:35:03.922022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.823 [2024-12-06 03:35:03.922028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.823 [2024-12-06 03:35:03.922042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.823 qpair failed and we were unable to recover it.
00:26:43.823 [2024-12-06 03:35:03.931936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.823 [2024-12-06 03:35:03.931997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.823 [2024-12-06 03:35:03.932012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.823 [2024-12-06 03:35:03.932019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.823 [2024-12-06 03:35:03.932026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.823 [2024-12-06 03:35:03.932041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.823 qpair failed and we were unable to recover it.
00:26:43.823 [2024-12-06 03:35:03.941919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.823 [2024-12-06 03:35:03.942009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.823 [2024-12-06 03:35:03.942023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.823 [2024-12-06 03:35:03.942030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.823 [2024-12-06 03:35:03.942036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.823 [2024-12-06 03:35:03.942051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.823 qpair failed and we were unable to recover it.
00:26:43.823 [2024-12-06 03:35:03.951985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:43.823 [2024-12-06 03:35:03.952045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:43.823 [2024-12-06 03:35:03.952059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:43.823 [2024-12-06 03:35:03.952066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:43.823 [2024-12-06 03:35:03.952072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:43.823 [2024-12-06 03:35:03.952086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:43.823 qpair failed and we were unable to recover it.
00:26:44.084 [2024-12-06 03:35:03.962020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.084 [2024-12-06 03:35:03.962089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.084 [2024-12-06 03:35:03.962103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.084 [2024-12-06 03:35:03.962110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.084 [2024-12-06 03:35:03.962116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.084 [2024-12-06 03:35:03.962130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.084 qpair failed and we were unable to recover it.
00:26:44.084 [2024-12-06 03:35:03.972039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.084 [2024-12-06 03:35:03.972092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.084 [2024-12-06 03:35:03.972106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.084 [2024-12-06 03:35:03.972113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.084 [2024-12-06 03:35:03.972119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.084 [2024-12-06 03:35:03.972134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.084 qpair failed and we were unable to recover it.
00:26:44.084 [2024-12-06 03:35:03.982072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.084 [2024-12-06 03:35:03.982143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.084 [2024-12-06 03:35:03.982157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.084 [2024-12-06 03:35:03.982164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.084 [2024-12-06 03:35:03.982170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.084 [2024-12-06 03:35:03.982185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.084 qpair failed and we were unable to recover it. 
00:26:44.084 [2024-12-06 03:35:03.992142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.084 [2024-12-06 03:35:03.992199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.084 [2024-12-06 03:35:03.992217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.084 [2024-12-06 03:35:03.992224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.084 [2024-12-06 03:35:03.992230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.084 [2024-12-06 03:35:03.992245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.084 qpair failed and we were unable to recover it. 
00:26:44.084 [2024-12-06 03:35:04.002144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.084 [2024-12-06 03:35:04.002210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.084 [2024-12-06 03:35:04.002225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.084 [2024-12-06 03:35:04.002232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.084 [2024-12-06 03:35:04.002238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.084 [2024-12-06 03:35:04.002253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.084 qpair failed and we were unable to recover it. 
00:26:44.084 [2024-12-06 03:35:04.012164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.084 [2024-12-06 03:35:04.012219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.084 [2024-12-06 03:35:04.012233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.084 [2024-12-06 03:35:04.012240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.084 [2024-12-06 03:35:04.012246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.084 [2024-12-06 03:35:04.012261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.084 qpair failed and we were unable to recover it. 
00:26:44.084 [2024-12-06 03:35:04.022195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.084 [2024-12-06 03:35:04.022249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.084 [2024-12-06 03:35:04.022265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.084 [2024-12-06 03:35:04.022272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.084 [2024-12-06 03:35:04.022278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.084 [2024-12-06 03:35:04.022292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.084 qpair failed and we were unable to recover it. 
00:26:44.084 [2024-12-06 03:35:04.032220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.084 [2024-12-06 03:35:04.032276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.084 [2024-12-06 03:35:04.032291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.084 [2024-12-06 03:35:04.032298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.084 [2024-12-06 03:35:04.032308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.084 [2024-12-06 03:35:04.032323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.084 qpair failed and we were unable to recover it. 
00:26:44.084 [2024-12-06 03:35:04.042265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.042339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.042353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.042360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.042366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.042381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.052206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.052296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.052310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.052317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.052323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.052337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.062324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.062378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.062392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.062399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.062405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.062419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.072332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.072394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.072408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.072415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.072421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.072436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.082368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.082427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.082441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.082448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.082454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.082468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.092393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.092454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.092468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.092476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.092482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.092496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.102478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.102533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.102547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.102554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.102561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.102575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.112463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.112518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.112532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.112540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.112546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.112560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.122427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.122487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.122505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.122512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.122518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.122532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.132440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.132504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.132518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.132525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.132532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.132546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.142458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.142521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.142535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.142543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.142549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.142563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.152563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.152627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.152641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.152647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.152654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.152668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.162617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.085 [2024-12-06 03:35:04.162678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.085 [2024-12-06 03:35:04.162692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.085 [2024-12-06 03:35:04.162699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.085 [2024-12-06 03:35:04.162708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.085 [2024-12-06 03:35:04.162723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.085 qpair failed and we were unable to recover it. 
00:26:44.085 [2024-12-06 03:35:04.172642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.086 [2024-12-06 03:35:04.172700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.086 [2024-12-06 03:35:04.172714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.086 [2024-12-06 03:35:04.172721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.086 [2024-12-06 03:35:04.172727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.086 [2024-12-06 03:35:04.172741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.086 qpair failed and we were unable to recover it. 
00:26:44.086 [2024-12-06 03:35:04.182686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.086 [2024-12-06 03:35:04.182744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.086 [2024-12-06 03:35:04.182759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.086 [2024-12-06 03:35:04.182766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.086 [2024-12-06 03:35:04.182772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.086 [2024-12-06 03:35:04.182787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.086 qpair failed and we were unable to recover it. 
00:26:44.086 [2024-12-06 03:35:04.192694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.086 [2024-12-06 03:35:04.192752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.086 [2024-12-06 03:35:04.192768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.086 [2024-12-06 03:35:04.192775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.086 [2024-12-06 03:35:04.192781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.086 [2024-12-06 03:35:04.192796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.086 qpair failed and we were unable to recover it. 
00:26:44.086 [2024-12-06 03:35:04.202755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.086 [2024-12-06 03:35:04.202811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.086 [2024-12-06 03:35:04.202826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.086 [2024-12-06 03:35:04.202833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.086 [2024-12-06 03:35:04.202839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.086 [2024-12-06 03:35:04.202854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.086 qpair failed and we were unable to recover it. 
00:26:44.086 [2024-12-06 03:35:04.212676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.086 [2024-12-06 03:35:04.212734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.086 [2024-12-06 03:35:04.212748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.086 [2024-12-06 03:35:04.212755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.086 [2024-12-06 03:35:04.212761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.086 [2024-12-06 03:35:04.212775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.086 qpair failed and we were unable to recover it. 
00:26:44.346 [2024-12-06 03:35:04.222748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.346 [2024-12-06 03:35:04.222807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.346 [2024-12-06 03:35:04.222823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.346 [2024-12-06 03:35:04.222830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.346 [2024-12-06 03:35:04.222836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.346 [2024-12-06 03:35:04.222852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.346 qpair failed and we were unable to recover it. 
00:26:44.346 [2024-12-06 03:35:04.232736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.346 [2024-12-06 03:35:04.232788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.346 [2024-12-06 03:35:04.232802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.346 [2024-12-06 03:35:04.232809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.346 [2024-12-06 03:35:04.232816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.346 [2024-12-06 03:35:04.232831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.346 qpair failed and we were unable to recover it. 
00:26:44.346 [2024-12-06 03:35:04.242825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.346 [2024-12-06 03:35:04.242901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.346 [2024-12-06 03:35:04.242918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.346 [2024-12-06 03:35:04.242926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.346 [2024-12-06 03:35:04.242932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.346 [2024-12-06 03:35:04.242953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.346 qpair failed and we were unable to recover it. 
00:26:44.346 [2024-12-06 03:35:04.252855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.346 [2024-12-06 03:35:04.252913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.346 [2024-12-06 03:35:04.252931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.346 [2024-12-06 03:35:04.252938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.346 [2024-12-06 03:35:04.252945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.252964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.262912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.262970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.262985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.262992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.262998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.263013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.272859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.272915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.272930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.272938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.272943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.272963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.282876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.282937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.282955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.282962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.282968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.282983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.292960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.293015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.293030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.293037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.293046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.293061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.302989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.303046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.303061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.303068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.303074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.303088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.313025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.313078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.313093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.313100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.313106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.313121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.322996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.323055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.323071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.323078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.323084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.323098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.333077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.333134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.333148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.333155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.333161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.333175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.343058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.343148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.343162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.343169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.343175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.343189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.353098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.353152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.353167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.353173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.353180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.353194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.363165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.363224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.363239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.363247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.363253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.363267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.373303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.373384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.347 [2024-12-06 03:35:04.373398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.347 [2024-12-06 03:35:04.373405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.347 [2024-12-06 03:35:04.373411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.347 [2024-12-06 03:35:04.373426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.347 qpair failed and we were unable to recover it.
00:26:44.347 [2024-12-06 03:35:04.383254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.347 [2024-12-06 03:35:04.383314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.383334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.383343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.383350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.383367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.348 [2024-12-06 03:35:04.393195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.348 [2024-12-06 03:35:04.393253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.393268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.393275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.393281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.393296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.348 [2024-12-06 03:35:04.403224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.348 [2024-12-06 03:35:04.403281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.403296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.403302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.403309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.403323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.348 [2024-12-06 03:35:04.413337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.348 [2024-12-06 03:35:04.413393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.413407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.413414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.413421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.413434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.348 [2024-12-06 03:35:04.423322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.348 [2024-12-06 03:35:04.423375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.423390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.423397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.423407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.423422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.348 [2024-12-06 03:35:04.433303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.348 [2024-12-06 03:35:04.433359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.433375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.433382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.433389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.433404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.348 [2024-12-06 03:35:04.443345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.348 [2024-12-06 03:35:04.443404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.443418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.443426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.443432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.443446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.348 [2024-12-06 03:35:04.453400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.348 [2024-12-06 03:35:04.453458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.453472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.453479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.453486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.453500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.348 [2024-12-06 03:35:04.463393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.348 [2024-12-06 03:35:04.463448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.463463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.463470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.463477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.463491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.348 [2024-12-06 03:35:04.473452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.348 [2024-12-06 03:35:04.473543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.348 [2024-12-06 03:35:04.473557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.348 [2024-12-06 03:35:04.473564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.348 [2024-12-06 03:35:04.473571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.348 [2024-12-06 03:35:04.473585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.348 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.483517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.483577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.483592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.483599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.483605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.483620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.493575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.493636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.493650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.493657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.493663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.493678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.503517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.503592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.503607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.503614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.503620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.503634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.513539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.513602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.513619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.513625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.513632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.513646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.523596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.523653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.523667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.523674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.523680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.523695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.533637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.533693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.533707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.533714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.533720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.533734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.543707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.543775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.543790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.543797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.543803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.543816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.553763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.553818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.553832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.553839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.553849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.553864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.563764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.563831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.563846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.563853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.563859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.563874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.573778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.573834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.573849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.573856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.573862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.573876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.583803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.583853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.583868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.583874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.583881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.583896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.593762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.609 [2024-12-06 03:35:04.593815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.609 [2024-12-06 03:35:04.593830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.609 [2024-12-06 03:35:04.593837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.609 [2024-12-06 03:35:04.593843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.609 [2024-12-06 03:35:04.593858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.609 qpair failed and we were unable to recover it.
00:26:44.609 [2024-12-06 03:35:04.603873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.609 [2024-12-06 03:35:04.603934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.609 [2024-12-06 03:35:04.603952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.609 [2024-12-06 03:35:04.603960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.603966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.603982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.613903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.613963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.613978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.613985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.613991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.614006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.623919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.623973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.623988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.623995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.624001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.624015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.633951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.634009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.634024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.634031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.634037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.634051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.643912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.644003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.644023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.644030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.644036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.644052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.654012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.654071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.654086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.654093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.654099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.654113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.664033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.664120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.664134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.664141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.664147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.664161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.674058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.674120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.674135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.674142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.674148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.674163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.684086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.684143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.684157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.684164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.684174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.684188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.694130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.694186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.694201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.694208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.694215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.694229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.704081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.704137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.704151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.704158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.704165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.704179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.714174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.714232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.714246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.714253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.714260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.714274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.724210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.724269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.724284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.724291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.724298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.610 [2024-12-06 03:35:04.724312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.610 qpair failed and we were unable to recover it. 
00:26:44.610 [2024-12-06 03:35:04.734276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.610 [2024-12-06 03:35:04.734337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.610 [2024-12-06 03:35:04.734351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.610 [2024-12-06 03:35:04.734358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.610 [2024-12-06 03:35:04.734364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.611 [2024-12-06 03:35:04.734379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.611 qpair failed and we were unable to recover it. 
00:26:44.611 [2024-12-06 03:35:04.744260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.611 [2024-12-06 03:35:04.744350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.611 [2024-12-06 03:35:04.744365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.611 [2024-12-06 03:35:04.744371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.611 [2024-12-06 03:35:04.744377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.611 [2024-12-06 03:35:04.744392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.611 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.754280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.754359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.754374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.754381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.754387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.754402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.764320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.764377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.764392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.764400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.764406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.764420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.774362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.774424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.774442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.774449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.774455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.774470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.784371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.784429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.784444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.784451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.784457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.784471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.794341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.794396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.794411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.794418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.794424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.794439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.804440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.804507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.804522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.804529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.804535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.804549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.814480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.814568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.814582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.814589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.814599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.814614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.824490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.824541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.824555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.824562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.824568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.824583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.834511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.834568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.834584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.834591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.834597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.834612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.844553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.872 [2024-12-06 03:35:04.844609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.872 [2024-12-06 03:35:04.844623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.872 [2024-12-06 03:35:04.844630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.872 [2024-12-06 03:35:04.844636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.872 [2024-12-06 03:35:04.844651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.872 qpair failed and we were unable to recover it. 
00:26:44.872 [2024-12-06 03:35:04.854562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.873 [2024-12-06 03:35:04.854620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.873 [2024-12-06 03:35:04.854635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.873 [2024-12-06 03:35:04.854642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.873 [2024-12-06 03:35:04.854648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.873 [2024-12-06 03:35:04.854663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.873 qpair failed and we were unable to recover it. 
00:26:44.873 [2024-12-06 03:35:04.864609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.873 [2024-12-06 03:35:04.864668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.873 [2024-12-06 03:35:04.864682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.873 [2024-12-06 03:35:04.864689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.873 [2024-12-06 03:35:04.864696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:44.873 [2024-12-06 03:35:04.864710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.873 qpair failed and we were unable to recover it. 
00:26:44.873 [2024-12-06 03:35:04.874646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.874705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.874719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.874727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.874733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.874747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.884674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.884735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.884750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.884757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.884763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.884777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.894710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.894790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.894806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.894813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.894819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.894833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.904733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.904820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.904838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.904845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.904851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.904866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.914747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.914812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.914826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.914833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.914839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.914854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.924827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.924889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.924903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.924910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.924916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.924931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.934810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.934867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.934882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.934889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.934895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.934910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.944840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.944901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.944916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.944923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.944933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.944951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.954822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.954904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.954918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.954925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.954931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.954945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.964895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.964960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.964974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.964981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.964988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.965003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.974963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.975045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.975060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.873 [2024-12-06 03:35:04.975066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.873 [2024-12-06 03:35:04.975073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.873 [2024-12-06 03:35:04.975088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.873 qpair failed and we were unable to recover it.
00:26:44.873 [2024-12-06 03:35:04.984963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.873 [2024-12-06 03:35:04.985033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.873 [2024-12-06 03:35:04.985047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.874 [2024-12-06 03:35:04.985054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.874 [2024-12-06 03:35:04.985060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.874 [2024-12-06 03:35:04.985075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.874 qpair failed and we were unable to recover it.
00:26:44.874 [2024-12-06 03:35:04.994980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.874 [2024-12-06 03:35:04.995042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.874 [2024-12-06 03:35:04.995056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.874 [2024-12-06 03:35:04.995063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.874 [2024-12-06 03:35:04.995069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.874 [2024-12-06 03:35:04.995084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.874 qpair failed and we were unable to recover it.
00:26:44.874 [2024-12-06 03:35:05.005013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:44.874 [2024-12-06 03:35:05.005073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:44.874 [2024-12-06 03:35:05.005088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:44.874 [2024-12-06 03:35:05.005095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.874 [2024-12-06 03:35:05.005101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:44.874 [2024-12-06 03:35:05.005117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:44.874 qpair failed and we were unable to recover it.
00:26:45.140 [2024-12-06 03:35:05.015049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.141 [2024-12-06 03:35:05.015110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.141 [2024-12-06 03:35:05.015124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.141 [2024-12-06 03:35:05.015131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.141 [2024-12-06 03:35:05.015138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.141 [2024-12-06 03:35:05.015152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.141 qpair failed and we were unable to recover it.
00:26:45.141 [2024-12-06 03:35:05.025071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.141 [2024-12-06 03:35:05.025127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.141 [2024-12-06 03:35:05.025141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.141 [2024-12-06 03:35:05.025148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.141 [2024-12-06 03:35:05.025155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.141 [2024-12-06 03:35:05.025169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.141 qpair failed and we were unable to recover it.
00:26:45.141 [2024-12-06 03:35:05.035097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.141 [2024-12-06 03:35:05.035155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.141 [2024-12-06 03:35:05.035174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.141 [2024-12-06 03:35:05.035182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.141 [2024-12-06 03:35:05.035189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.141 [2024-12-06 03:35:05.035204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.141 qpair failed and we were unable to recover it.
00:26:45.141 [2024-12-06 03:35:05.045122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.141 [2024-12-06 03:35:05.045181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.141 [2024-12-06 03:35:05.045195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.141 [2024-12-06 03:35:05.045202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.141 [2024-12-06 03:35:05.045208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.141 [2024-12-06 03:35:05.045222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.141 qpair failed and we were unable to recover it.
00:26:45.141 [2024-12-06 03:35:05.055178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.141 [2024-12-06 03:35:05.055231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.141 [2024-12-06 03:35:05.055246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.141 [2024-12-06 03:35:05.055253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.141 [2024-12-06 03:35:05.055259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.141 [2024-12-06 03:35:05.055274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.141 qpair failed and we were unable to recover it.
00:26:45.141 [2024-12-06 03:35:05.065244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.141 [2024-12-06 03:35:05.065304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.141 [2024-12-06 03:35:05.065318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.141 [2024-12-06 03:35:05.065325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.142 [2024-12-06 03:35:05.065332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.142 [2024-12-06 03:35:05.065346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.142 qpair failed and we were unable to recover it.
00:26:45.142 [2024-12-06 03:35:05.075224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.142 [2024-12-06 03:35:05.075286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.142 [2024-12-06 03:35:05.075301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.142 [2024-12-06 03:35:05.075308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.142 [2024-12-06 03:35:05.075317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.142 [2024-12-06 03:35:05.075332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.142 qpair failed and we were unable to recover it.
00:26:45.142 [2024-12-06 03:35:05.085247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.142 [2024-12-06 03:35:05.085303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.142 [2024-12-06 03:35:05.085318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.142 [2024-12-06 03:35:05.085325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.142 [2024-12-06 03:35:05.085331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.142 [2024-12-06 03:35:05.085346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.142 qpair failed and we were unable to recover it.
00:26:45.142 [2024-12-06 03:35:05.095211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.142 [2024-12-06 03:35:05.095300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.142 [2024-12-06 03:35:05.095315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.142 [2024-12-06 03:35:05.095322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.142 [2024-12-06 03:35:05.095328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.142 [2024-12-06 03:35:05.095343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.142 qpair failed and we were unable to recover it.
00:26:45.142 [2024-12-06 03:35:05.105291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.143 [2024-12-06 03:35:05.105365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.143 [2024-12-06 03:35:05.105380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.143 [2024-12-06 03:35:05.105387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.143 [2024-12-06 03:35:05.105393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.143 [2024-12-06 03:35:05.105408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.143 qpair failed and we were unable to recover it.
00:26:45.143 [2024-12-06 03:35:05.115299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.143 [2024-12-06 03:35:05.115356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.143 [2024-12-06 03:35:05.115371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.143 [2024-12-06 03:35:05.115378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.143 [2024-12-06 03:35:05.115385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.143 [2024-12-06 03:35:05.115399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.143 qpair failed and we were unable to recover it.
00:26:45.143 [2024-12-06 03:35:05.125366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.143 [2024-12-06 03:35:05.125434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.143 [2024-12-06 03:35:05.125449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.143 [2024-12-06 03:35:05.125456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.143 [2024-12-06 03:35:05.125463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.143 [2024-12-06 03:35:05.125477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.143 qpair failed and we were unable to recover it.
00:26:45.143 [2024-12-06 03:35:05.135383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.143 [2024-12-06 03:35:05.135444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.143 [2024-12-06 03:35:05.135459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.143 [2024-12-06 03:35:05.135466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.143 [2024-12-06 03:35:05.135472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.143 [2024-12-06 03:35:05.135487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.143 qpair failed and we were unable to recover it.
00:26:45.143 [2024-12-06 03:35:05.145395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.143 [2024-12-06 03:35:05.145452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.146 [2024-12-06 03:35:05.145465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.146 [2024-12-06 03:35:05.145472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.146 [2024-12-06 03:35:05.145479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.146 [2024-12-06 03:35:05.145493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.146 qpair failed and we were unable to recover it.
00:26:45.146 [2024-12-06 03:35:05.155432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.146 [2024-12-06 03:35:05.155487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.146 [2024-12-06 03:35:05.155502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.146 [2024-12-06 03:35:05.155509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.146 [2024-12-06 03:35:05.155515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.146 [2024-12-06 03:35:05.155530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.146 qpair failed and we were unable to recover it.
00:26:45.146 [2024-12-06 03:35:05.165467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.146 [2024-12-06 03:35:05.165525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.146 [2024-12-06 03:35:05.165543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.146 [2024-12-06 03:35:05.165550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.146 [2024-12-06 03:35:05.165556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.146 [2024-12-06 03:35:05.165570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.146 qpair failed and we were unable to recover it.
00:26:45.146 [2024-12-06 03:35:05.175492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.146 [2024-12-06 03:35:05.175548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.146 [2024-12-06 03:35:05.175562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.146 [2024-12-06 03:35:05.175569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.146 [2024-12-06 03:35:05.175575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.146 [2024-12-06 03:35:05.175589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.146 qpair failed and we were unable to recover it.
00:26:45.146 [2024-12-06 03:35:05.185513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.146 [2024-12-06 03:35:05.185573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.146 [2024-12-06 03:35:05.185587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.146 [2024-12-06 03:35:05.185595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.146 [2024-12-06 03:35:05.185600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.146 [2024-12-06 03:35:05.185614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.146 qpair failed and we were unable to recover it.
00:26:45.146 [2024-12-06 03:35:05.195525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.146 [2024-12-06 03:35:05.195580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.146 [2024-12-06 03:35:05.195594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.146 [2024-12-06 03:35:05.195602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.146 [2024-12-06 03:35:05.195608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.146 [2024-12-06 03:35:05.195622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.146 qpair failed and we were unable to recover it.
00:26:45.146 [2024-12-06 03:35:05.205620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.146 [2024-12-06 03:35:05.205718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.146 [2024-12-06 03:35:05.205734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.146 [2024-12-06 03:35:05.205741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.146 [2024-12-06 03:35:05.205751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.146 [2024-12-06 03:35:05.205766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.146 qpair failed and we were unable to recover it.
00:26:45.146 [2024-12-06 03:35:05.215607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.146 [2024-12-06 03:35:05.215686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.146 [2024-12-06 03:35:05.215701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.146 [2024-12-06 03:35:05.215708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.146 [2024-12-06 03:35:05.215714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.146 [2024-12-06 03:35:05.215728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.146 qpair failed and we were unable to recover it.
00:26:45.146 [2024-12-06 03:35:05.225635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.146 [2024-12-06 03:35:05.225690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.146 [2024-12-06 03:35:05.225704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.146 [2024-12-06 03:35:05.225711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.146 [2024-12-06 03:35:05.225717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.146 [2024-12-06 03:35:05.225732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.146 qpair failed and we were unable to recover it. 
00:26:45.146 [2024-12-06 03:35:05.235664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.146 [2024-12-06 03:35:05.235719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.146 [2024-12-06 03:35:05.235733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.146 [2024-12-06 03:35:05.235740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.146 [2024-12-06 03:35:05.235746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.146 [2024-12-06 03:35:05.235760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.146 qpair failed and we were unable to recover it. 
00:26:45.146 [2024-12-06 03:35:05.245699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.146 [2024-12-06 03:35:05.245758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.146 [2024-12-06 03:35:05.245775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.146 [2024-12-06 03:35:05.245782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.146 [2024-12-06 03:35:05.245789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.146 [2024-12-06 03:35:05.245804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.146 qpair failed and we were unable to recover it. 
00:26:45.146 [2024-12-06 03:35:05.255720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.146 [2024-12-06 03:35:05.255777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.146 [2024-12-06 03:35:05.255792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.146 [2024-12-06 03:35:05.255799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.146 [2024-12-06 03:35:05.255806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.146 [2024-12-06 03:35:05.255820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.146 qpair failed and we were unable to recover it. 
00:26:45.146 [2024-12-06 03:35:05.265745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.146 [2024-12-06 03:35:05.265803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.146 [2024-12-06 03:35:05.265819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.146 [2024-12-06 03:35:05.265826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.146 [2024-12-06 03:35:05.265832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.146 [2024-12-06 03:35:05.265847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.146 qpair failed and we were unable to recover it. 
00:26:45.146 [2024-12-06 03:35:05.275766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.146 [2024-12-06 03:35:05.275826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.146 [2024-12-06 03:35:05.275841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.146 [2024-12-06 03:35:05.275848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.146 [2024-12-06 03:35:05.275854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.146 [2024-12-06 03:35:05.275868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.146 qpair failed and we were unable to recover it. 
00:26:45.410 [2024-12-06 03:35:05.285810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.285885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.285899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.285907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.285913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.285927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.295888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.295956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.295975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.295982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.295988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.296003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.305863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.305942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.305961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.305968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.305974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.305989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.315880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.315936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.315955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.315962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.315969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.315984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.325919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.325982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.325996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.326004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.326010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.326025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.335971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.336026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.336041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.336051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.336057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.336073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.346030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.346086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.346100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.346107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.346114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.346128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.355998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.356057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.356071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.356078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.356084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.356098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.366041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.366099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.366113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.366120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.366126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.366141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.376078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.376138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.376152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.376160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.376165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.376180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.386074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.386132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.386146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.386153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.386160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.386175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.396119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.396174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.396188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.396195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.396201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.396217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.406136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.406214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.406228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.406235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.410 [2024-12-06 03:35:05.406241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.410 [2024-12-06 03:35:05.406256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.410 qpair failed and we were unable to recover it.
00:26:45.410 [2024-12-06 03:35:05.416175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.410 [2024-12-06 03:35:05.416280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.410 [2024-12-06 03:35:05.416294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.410 [2024-12-06 03:35:05.416301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.416308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.416322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.426204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.426260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.426277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.426285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.426291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.426305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.436219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.436274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.436288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.436295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.436302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.436316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.446264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.446371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.446385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.446392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.446398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.446413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.456310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.456400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.456414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.456421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.456428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.456441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.466283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.466337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.466351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.466361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.466368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.466383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.476292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.476347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.476362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.476369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.476376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.476391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.486395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.486455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.486469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.486476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.486483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.486498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.496381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.496479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.496494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.496501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.496507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.496522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.506422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.506476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.506491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.506498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.506504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.506519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.516444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.516500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.516514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.516522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.516528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.516542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.526488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:45.411 [2024-12-06 03:35:05.526547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:45.411 [2024-12-06 03:35:05.526561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:45.411 [2024-12-06 03:35:05.526568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:45.411 [2024-12-06 03:35:05.526575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0
00:26:45.411 [2024-12-06 03:35:05.526589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:45.411 qpair failed and we were unable to recover it.
00:26:45.411 [2024-12-06 03:35:05.536440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.411 [2024-12-06 03:35:05.536500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.411 [2024-12-06 03:35:05.536515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.411 [2024-12-06 03:35:05.536522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.411 [2024-12-06 03:35:05.536528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.411 [2024-12-06 03:35:05.536543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.411 qpair failed and we were unable to recover it. 
00:26:45.742 [2024-12-06 03:35:05.546541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.742 [2024-12-06 03:35:05.546607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.742 [2024-12-06 03:35:05.546628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.742 [2024-12-06 03:35:05.546636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.742 [2024-12-06 03:35:05.546643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.742 [2024-12-06 03:35:05.546661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.742 qpair failed and we were unable to recover it. 
00:26:45.742 [2024-12-06 03:35:05.556638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.742 [2024-12-06 03:35:05.556731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.742 [2024-12-06 03:35:05.556750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.742 [2024-12-06 03:35:05.556757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.742 [2024-12-06 03:35:05.556763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.742 [2024-12-06 03:35:05.556778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.742 qpair failed and we were unable to recover it. 
00:26:45.742 [2024-12-06 03:35:05.566642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.742 [2024-12-06 03:35:05.566703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.742 [2024-12-06 03:35:05.566718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.742 [2024-12-06 03:35:05.566725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.566731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.566746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.576579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.576641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.576656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.576663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.576670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.576684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.586616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.586710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.586725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.586732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.586738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.586754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.596748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.596805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.596820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.596830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.596836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.596851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.606665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.606723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.606737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.606744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.606751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.606765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.616768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.616831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.616847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.616854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.616860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.616874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.626776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.626836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.626851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.626858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.626864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.626879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.636760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.636843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.636857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.636864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.636870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.636884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.646780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.646841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.646856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.646863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.646870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.646884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.656859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.656918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.656932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.656940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.656950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.656966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.666846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.666928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.666942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.666954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.666960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.666975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.676900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.676962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.676976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.676983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.676989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.677004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.686890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.686955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.686972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.686979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.743 [2024-12-06 03:35:05.686985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.743 [2024-12-06 03:35:05.687000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 03:35:05.696991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.743 [2024-12-06 03:35:05.697051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.743 [2024-12-06 03:35:05.697066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.743 [2024-12-06 03:35:05.697073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.697079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.697094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.707027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.707094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.707108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.707115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.707121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.707136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.717051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.717124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.717140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.717147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.717153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.717168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.727064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.727134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.727149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.727159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.727165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.727180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.737096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.737153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.737168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.737174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.737181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.737196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.747126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.747182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.747196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.747203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.747209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.747223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.757103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.757190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.757204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.757212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.757218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.757232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.767194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.767259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.767273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.767280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.767286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.767300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.777154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.777210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.777225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.777232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.777239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.777253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.787192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.787246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.787261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.787267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.787274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.787289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.797201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.797262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.797276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.797283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.797289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.797304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.807240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.807298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.807312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.807320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.807326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.807341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.817272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.817333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.817347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.817354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.817360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.744 [2024-12-06 03:35:05.817374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 03:35:05.827264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.744 [2024-12-06 03:35:05.827326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.744 [2024-12-06 03:35:05.827341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.744 [2024-12-06 03:35:05.827348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.744 [2024-12-06 03:35:05.827353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.745 [2024-12-06 03:35:05.827368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.745 qpair failed and we were unable to recover it. 
00:26:45.745 [2024-12-06 03:35:05.837370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.745 [2024-12-06 03:35:05.837459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.745 [2024-12-06 03:35:05.837475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.745 [2024-12-06 03:35:05.837482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.745 [2024-12-06 03:35:05.837489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.745 [2024-12-06 03:35:05.837503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.745 qpair failed and we were unable to recover it. 
00:26:45.745 [2024-12-06 03:35:05.847392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.745 [2024-12-06 03:35:05.847450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.745 [2024-12-06 03:35:05.847465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.745 [2024-12-06 03:35:05.847472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.745 [2024-12-06 03:35:05.847478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.745 [2024-12-06 03:35:05.847492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.745 qpair failed and we were unable to recover it. 
00:26:45.745 [2024-12-06 03:35:05.857367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.745 [2024-12-06 03:35:05.857419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.745 [2024-12-06 03:35:05.857434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.745 [2024-12-06 03:35:05.857444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.745 [2024-12-06 03:35:05.857450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:45.745 [2024-12-06 03:35:05.857464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.745 qpair failed and we were unable to recover it. 
00:26:46.026 [2024-12-06 03:35:05.867461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.026 [2024-12-06 03:35:05.867531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.026 [2024-12-06 03:35:05.867545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.026 [2024-12-06 03:35:05.867551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.026 [2024-12-06 03:35:05.867558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.027 [2024-12-06 03:35:05.867572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.027 qpair failed and we were unable to recover it. 
00:26:46.027 [2024-12-06 03:35:05.877504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.027 [2024-12-06 03:35:05.877592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.027 [2024-12-06 03:35:05.877606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.027 [2024-12-06 03:35:05.877613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.027 [2024-12-06 03:35:05.877619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.027 [2024-12-06 03:35:05.877633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.027 qpair failed and we were unable to recover it. 
00:26:46.027 [2024-12-06 03:35:05.887505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.027 [2024-12-06 03:35:05.887563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.027 [2024-12-06 03:35:05.887579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.027 [2024-12-06 03:35:05.887586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.027 [2024-12-06 03:35:05.887592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.027 [2024-12-06 03:35:05.887608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.027 qpair failed and we were unable to recover it. 
00:26:46.027 [2024-12-06 03:35:05.897552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.027 [2024-12-06 03:35:05.897636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.027 [2024-12-06 03:35:05.897650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.027 [2024-12-06 03:35:05.897657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.027 [2024-12-06 03:35:05.897663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.028 [2024-12-06 03:35:05.897678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.028 qpair failed and we were unable to recover it. 
00:26:46.028 [2024-12-06 03:35:05.907513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.028 [2024-12-06 03:35:05.907571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.028 [2024-12-06 03:35:05.907585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.028 [2024-12-06 03:35:05.907592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.028 [2024-12-06 03:35:05.907599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.028 [2024-12-06 03:35:05.907613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.028 qpair failed and we were unable to recover it. 
00:26:46.028 [2024-12-06 03:35:05.917640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.028 [2024-12-06 03:35:05.917691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.028 [2024-12-06 03:35:05.917705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.028 [2024-12-06 03:35:05.917712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.028 [2024-12-06 03:35:05.917718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.028 [2024-12-06 03:35:05.917733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.028 qpair failed and we were unable to recover it. 
00:26:46.028 [2024-12-06 03:35:05.927567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.028 [2024-12-06 03:35:05.927628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.028 [2024-12-06 03:35:05.927642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.028 [2024-12-06 03:35:05.927649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.028 [2024-12-06 03:35:05.927655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.028 [2024-12-06 03:35:05.927669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.029 qpair failed and we were unable to recover it. 
00:26:46.029 [2024-12-06 03:35:05.937593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.029 [2024-12-06 03:35:05.937680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.029 [2024-12-06 03:35:05.937694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.029 [2024-12-06 03:35:05.937700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.029 [2024-12-06 03:35:05.937707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.029 [2024-12-06 03:35:05.937720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.029 qpair failed and we were unable to recover it. 
00:26:46.029 [2024-12-06 03:35:05.947644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.029 [2024-12-06 03:35:05.947705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.029 [2024-12-06 03:35:05.947719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.029 [2024-12-06 03:35:05.947726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.029 [2024-12-06 03:35:05.947732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.031 [2024-12-06 03:35:05.947747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.031 qpair failed and we were unable to recover it. 
00:26:46.031 [2024-12-06 03:35:05.957714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.032 [2024-12-06 03:35:05.957768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.032 [2024-12-06 03:35:05.957783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.032 [2024-12-06 03:35:05.957790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.032 [2024-12-06 03:35:05.957796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.032 [2024-12-06 03:35:05.957810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.032 qpair failed and we were unable to recover it. 
00:26:46.032 [2024-12-06 03:35:05.967717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.032 [2024-12-06 03:35:05.967779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.032 [2024-12-06 03:35:05.967795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.032 [2024-12-06 03:35:05.967802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.032 [2024-12-06 03:35:05.967809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.032 [2024-12-06 03:35:05.967823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.032 qpair failed and we were unable to recover it. 
00:26:46.032 [2024-12-06 03:35:05.977791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.032 [2024-12-06 03:35:05.977852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.032 [2024-12-06 03:35:05.977866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.032 [2024-12-06 03:35:05.977873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.033 [2024-12-06 03:35:05.977879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.033 [2024-12-06 03:35:05.977893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.033 qpair failed and we were unable to recover it. 
00:26:46.033 [2024-12-06 03:35:05.987803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.033 [2024-12-06 03:35:05.987862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.033 [2024-12-06 03:35:05.987877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.033 [2024-12-06 03:35:05.987888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.033 [2024-12-06 03:35:05.987894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.033 [2024-12-06 03:35:05.987908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.033 qpair failed and we were unable to recover it. 
00:26:46.033 [2024-12-06 03:35:05.997885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.033 [2024-12-06 03:35:05.997953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.033 [2024-12-06 03:35:05.997968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.033 [2024-12-06 03:35:05.997975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.033 [2024-12-06 03:35:05.997981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x115cbe0 00:26:46.033 [2024-12-06 03:35:05.997996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.033 qpair failed and we were unable to recover it. 
00:26:46.033 [2024-12-06 03:35:06.007882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.033 [2024-12-06 03:35:06.007965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.033 [2024-12-06 03:35:06.007987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.033 [2024-12-06 03:35:06.007996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.033 [2024-12-06 03:35:06.008002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cd8000b90 00:26:46.033 [2024-12-06 03:35:06.008021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.033 qpair failed and we were unable to recover it. 
00:26:46.033 [2024-12-06 03:35:06.017835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.033 [2024-12-06 03:35:06.017900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.033 [2024-12-06 03:35:06.017916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.033 [2024-12-06 03:35:06.017923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.033 [2024-12-06 03:35:06.017929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cd8000b90 00:26:46.034 [2024-12-06 03:35:06.017945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.034 qpair failed and we were unable to recover it. 00:26:46.034 [2024-12-06 03:35:06.018018] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:46.034 A controller has encountered a failure and is being reset. 00:26:46.034 Controller properly reset. 00:26:46.034 Initializing NVMe Controllers 00:26:46.034 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:46.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:46.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:46.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:46.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:46.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:46.034 Initialization complete. Launching workers. 
00:26:46.034 Starting thread on core 1 00:26:46.034 Starting thread on core 2 00:26:46.034 Starting thread on core 3 00:26:46.034 Starting thread on core 0 00:26:46.034 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:46.034 00:26:46.034 real 0m10.727s 00:26:46.034 user 0m19.222s 00:26:46.034 sys 0m4.304s 00:26:46.034 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:46.034 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.034 ************************************ 00:26:46.034 END TEST nvmf_target_disconnect_tc2 00:26:46.034 ************************************ 00:26:46.034 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:46.034 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:46.034 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.035 rmmod nvme_tcp 00:26:46.035 rmmod nvme_fabrics 00:26:46.035 rmmod nvme_keyring 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2772504 ']' 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2772504 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2772504 ']' 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2772504 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:46.035 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2772504 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2772504' 00:26:46.305 killing process with pid 2772504 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2772504 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2772504 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.305 03:35:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.841 03:35:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:48.841 00:26:48.841 real 0m19.279s 00:26:48.841 user 0m46.511s 00:26:48.841 sys 0m9.046s 00:26:48.841 03:35:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.841 03:35:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:48.841 ************************************ 00:26:48.841 END TEST nvmf_target_disconnect 00:26:48.841 ************************************ 00:26:48.841 03:35:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:48.841 00:26:48.841 real 5m42.764s 00:26:48.841 user 10m23.228s 00:26:48.841 sys 1m52.887s 00:26:48.841 03:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.841 03:35:08 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.841 ************************************ 00:26:48.841 END TEST nvmf_host 00:26:48.841 ************************************ 00:26:48.841 03:35:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:48.841 03:35:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:48.841 03:35:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:48.841 03:35:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:48.841 03:35:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:48.841 03:35:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.841 ************************************ 00:26:48.841 START TEST nvmf_target_core_interrupt_mode 00:26:48.841 ************************************ 00:26:48.841 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:48.841 * Looking for test storage... 
00:26:48.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:48.841 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:48.841 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:26:48.841 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:48.841 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:48.841 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.841 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:48.842 03:35:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.842 --rc 
genhtml_branch_coverage=1 00:26:48.842 --rc genhtml_function_coverage=1 00:26:48.842 --rc genhtml_legend=1 00:26:48.842 --rc geninfo_all_blocks=1 00:26:48.842 --rc geninfo_unexecuted_blocks=1 00:26:48.842 00:26:48.842 ' 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.842 --rc genhtml_branch_coverage=1 00:26:48.842 --rc genhtml_function_coverage=1 00:26:48.842 --rc genhtml_legend=1 00:26:48.842 --rc geninfo_all_blocks=1 00:26:48.842 --rc geninfo_unexecuted_blocks=1 00:26:48.842 00:26:48.842 ' 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.842 --rc genhtml_branch_coverage=1 00:26:48.842 --rc genhtml_function_coverage=1 00:26:48.842 --rc genhtml_legend=1 00:26:48.842 --rc geninfo_all_blocks=1 00:26:48.842 --rc geninfo_unexecuted_blocks=1 00:26:48.842 00:26:48.842 ' 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.842 --rc genhtml_branch_coverage=1 00:26:48.842 --rc genhtml_function_coverage=1 00:26:48.842 --rc genhtml_legend=1 00:26:48.842 --rc geninfo_all_blocks=1 00:26:48.842 --rc geninfo_unexecuted_blocks=1 00:26:48.842 00:26:48.842 ' 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.842 
03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.842 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.843 03:35:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:48.843 
03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:48.843 ************************************ 00:26:48.843 START TEST nvmf_abort 00:26:48.843 ************************************ 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:48.843 * Looking for test storage... 
00:26:48.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:48.843 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:49.104 03:35:08 
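The `lt 1.15 2` trace above (cmp_versions in scripts/common.sh) splits each version string on `.`, `-`, or `:` and compares components numerically, left to right, treating missing components as 0. A minimal standalone sketch of that logic (the function name `version_lt` is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison seen in the trace:
# returns 0 (true) when $1 is strictly less than $2.
version_lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' or ':' as in common.sh@336
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # Missing components compare as 0 (1.15 vs 2 behaves like 1.15 vs 2.0).
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace ends at `return 0`: the first component pair (1 vs 2) already decides the comparison, and the lcov coverage options get enabled.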
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.104 --rc genhtml_branch_coverage=1 00:26:49.104 --rc genhtml_function_coverage=1 00:26:49.104 --rc genhtml_legend=1 00:26:49.104 --rc geninfo_all_blocks=1 00:26:49.104 --rc geninfo_unexecuted_blocks=1 00:26:49.104 00:26:49.104 ' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.104 --rc genhtml_branch_coverage=1 00:26:49.104 --rc genhtml_function_coverage=1 00:26:49.104 --rc genhtml_legend=1 00:26:49.104 --rc geninfo_all_blocks=1 00:26:49.104 --rc geninfo_unexecuted_blocks=1 00:26:49.104 00:26:49.104 ' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.104 --rc genhtml_branch_coverage=1 00:26:49.104 --rc genhtml_function_coverage=1 00:26:49.104 --rc genhtml_legend=1 00:26:49.104 --rc geninfo_all_blocks=1 00:26:49.104 --rc geninfo_unexecuted_blocks=1 00:26:49.104 00:26:49.104 ' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.104 --rc genhtml_branch_coverage=1 00:26:49.104 --rc genhtml_function_coverage=1 00:26:49.104 --rc genhtml_legend=1 00:26:49.104 --rc geninfo_all_blocks=1 00:26:49.104 --rc geninfo_unexecuted_blocks=1 00:26:49.104 00:26:49.104 ' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.104 03:35:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.104 03:35:08 
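The `build_nvmf_app_args` trace above accumulates the target's launch flags in a bash array so optional flags can be appended conditionally; with interrupt-mode testing enabled, common.sh@34 appends `--interrupt-mode`. A rough re-creation (variable names and values here are illustrative, not SPDK's exact ones):

```shell
#!/usr/bin/env bash
# Illustrative sketch of conditional app-arg accumulation, as in
# build_nvmf_app_args. INTERRUPT_MODE stands in for the real test flag.
NVMF_APP_SHM_ID=0
INTERRUPT_MODE=1
NVMF_APP=(nvmf_tgt)

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + log mask, always added
if [ "$INTERRUPT_MODE" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)               # mirrors nvmf/common.sh@33-34 above
fi

echo "${NVMF_APP[@]}"
```

Keeping the command line in an array (rather than a string) preserves argument boundaries when the app is finally launched as `"${NVMF_APP[@]}"`.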
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.104 03:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.104 03:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:49.104 03:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:49.104 03:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:49.104 03:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.381 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:54.381 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.381 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.381 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.381 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.381 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.381 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.382 03:35:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:54.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:54.382 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.382 
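The discovery loop above maps each candidate PCI function to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*`, then strips the path prefix to get the interface name (yielding the `Found net devices under 0000:86:00.0: cvl_0_0` lines). A minimal re-creation against a fake sysfs tree, so it runs on any machine:

```shell
#!/usr/bin/env bash
# Re-creation of the sysfs PCI-to-netdev mapping from the trace, using a
# temporary directory in place of /sys so no real hardware is required.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)       # one glob hit per net interface
    pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the device name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

On the real system the glob runs against `/sys/bus/pci/devices`, and the resulting `net_devs` array feeds the TCP interface selection that follows.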
03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:54.382 Found net devices under 0000:86:00.0: cvl_0_0 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:54.382 Found net devices under 0000:86:00.1: cvl_0_1 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.382 03:35:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.382 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:54.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:26:54.383 00:26:54.383 --- 10.0.0.2 ping statistics --- 00:26:54.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.383 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:26:54.383 00:26:54.383 --- 10.0.0.1 ping statistics --- 00:26:54.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.383 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2777251 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2777251 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2777251 ']' 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.383 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.642 [2024-12-06 03:35:14.559015] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:54.642 [2024-12-06 03:35:14.559941] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:26:54.642 [2024-12-06 03:35:14.559985] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.642 [2024-12-06 03:35:14.626877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:54.642 [2024-12-06 03:35:14.669764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.642 [2024-12-06 03:35:14.669801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.642 [2024-12-06 03:35:14.669808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.642 [2024-12-06 03:35:14.669814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.642 [2024-12-06 03:35:14.669820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.642 [2024-12-06 03:35:14.671133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.642 [2024-12-06 03:35:14.671222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.642 [2024-12-06 03:35:14.671224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.642 [2024-12-06 03:35:14.740309] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:54.642 [2024-12-06 03:35:14.740330] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:54.642 [2024-12-06 03:35:14.740513] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:26:54.642 [2024-12-06 03:35:14.740585] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:54.642 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.642 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:54.642 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:54.642 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:54.642 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.901 [2024-12-06 03:35:14.803702] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:26:54.901 Malloc0 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.901 Delay0 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.901 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.902 [2024-12-06 03:35:14.871860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.902 03:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:54.902 [2024-12-06 03:35:15.030096] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:57.433 Initializing NVMe Controllers 00:26:57.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:57.433 controller IO queue size 128 less than required 00:26:57.433 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:57.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:57.433 Initialization complete. Launching workers. 
00:26:57.433 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36676 00:26:57.433 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36733, failed to submit 66 00:26:57.433 success 36676, unsuccessful 57, failed 0 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.433 rmmod nvme_tcp 00:26:57.433 rmmod nvme_fabrics 00:26:57.433 rmmod nvme_keyring 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.433 03:35:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2777251 ']' 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2777251 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2777251 ']' 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2777251 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2777251 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2777251' 00:26:57.433 killing process with pid 2777251 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2777251 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2777251 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.433 03:35:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.433 03:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.965 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:59.965 00:26:59.965 real 0m10.743s 00:26:59.965 user 0m10.549s 00:26:59.965 sys 0m5.420s 00:26:59.965 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:59.966 ************************************ 00:26:59.966 END TEST nvmf_abort 00:26:59.966 ************************************ 00:26:59.966 03:35:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:59.966 ************************************ 00:26:59.966 START TEST nvmf_ns_hotplug_stress 00:26:59.966 ************************************ 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:59.966 * Looking for test storage... 
00:26:59.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:59.966 03:35:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:59.966 03:35:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:59.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.966 --rc genhtml_branch_coverage=1 00:26:59.966 --rc genhtml_function_coverage=1 00:26:59.966 --rc genhtml_legend=1 00:26:59.966 --rc geninfo_all_blocks=1 00:26:59.966 --rc geninfo_unexecuted_blocks=1 00:26:59.966 00:26:59.966 ' 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:59.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.966 --rc genhtml_branch_coverage=1 00:26:59.966 --rc genhtml_function_coverage=1 00:26:59.966 --rc genhtml_legend=1 00:26:59.966 --rc geninfo_all_blocks=1 00:26:59.966 --rc geninfo_unexecuted_blocks=1 00:26:59.966 00:26:59.966 ' 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:59.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.966 --rc genhtml_branch_coverage=1 00:26:59.966 --rc genhtml_function_coverage=1 00:26:59.966 --rc genhtml_legend=1 00:26:59.966 --rc geninfo_all_blocks=1 00:26:59.966 --rc geninfo_unexecuted_blocks=1 00:26:59.966 00:26:59.966 ' 00:26:59.966 03:35:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:59.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.966 --rc genhtml_branch_coverage=1 00:26:59.966 --rc genhtml_function_coverage=1 00:26:59.966 --rc genhtml_legend=1 00:26:59.966 --rc geninfo_all_blocks=1 00:26:59.966 --rc geninfo_unexecuted_blocks=1 00:26:59.966 00:26:59.966 ' 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.966 03:35:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.966 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.967 
03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:59.967 03:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:05.235 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.236 
03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.236 03:35:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:05.236 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.236 03:35:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:05.236 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.236 
03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:05.236 Found net devices under 0000:86:00.0: cvl_0_0 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:05.236 Found net devices under 0000:86:00.1: cvl_0_1 00:27:05.236 
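The discovery phase above buckets the host's NICs by PCI vendor/device ID (e810, x722, mlx families) before choosing TCP test interfaces. A minimal standalone sketch of that matching logic, using only the IDs visible in this log (the real list lives in nvmf/common.sh and is longer):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the NIC bucketing seen in nvmf/common.sh above.
# Buckets a PCI vendor:device pair into the e810 / x722 / mlx families.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 ids from this log
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox vendor id
        *)                           echo unknown ;;
    esac
}

# The log found two ports of one adapter, 0000:86:00.0/.1 (0x8086 - 0x159b):
classify_nic 0x8086 0x159b
```

Both ports classify as e810, which is why the run proceeds down the `[[ e810 == e810 ]]` branch in the trace.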
03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:27:05.236 00:27:05.236 --- 10.0.0.2 ping statistics --- 00:27:05.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.236 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:27:05.236 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:05.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:27:05.236 00:27:05.236 --- 10.0.0.1 ping statistics --- 00:27:05.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.236 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:05.237 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.237 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:05.237 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.237 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.237 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.237 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.237 03:35:25 
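The nvmf_tcp_init sequence traced above moves one port (cvl_0_0) into a fresh network namespace as the target side (10.0.0.2) while the other (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), then opens port 4420 and verifies reachability with pings. A dry-run sketch of that plumbing, with commands echoed rather than executed since the real ones require root:

```shell
#!/usr/bin/env bash
# Hypothetical dry-run of the nvmf_tcp_init network setup logged above.
# `run` only prints each command; the real script executes them as root.
run() { echo "+ $*"; }

ns=cvl_0_0_ns_spdk
run ip netns add "$ns"                                      # target namespace
run ip link set cvl_0_0 netns "$ns"                         # move target port
run ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0 # target address
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

After this, `ping -c 1 10.0.0.2` from the root namespace and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` confirm the two sides can reach each other, as the trace shows.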
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.237 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.237 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2781033 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2781033 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2781033 ']' 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.495 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:05.495 [2024-12-06 03:35:25.455230] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:05.495 [2024-12-06 03:35:25.456188] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:27:05.495 [2024-12-06 03:35:25.456221] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.495 [2024-12-06 03:35:25.521575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:05.495 [2024-12-06 03:35:25.563242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.495 [2024-12-06 03:35:25.563281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.495 [2024-12-06 03:35:25.563288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.495 [2024-12-06 03:35:25.563294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.495 [2024-12-06 03:35:25.563300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:05.495 [2024-12-06 03:35:25.564675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.495 [2024-12-06 03:35:25.564765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.495 [2024-12-06 03:35:25.564766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.495 [2024-12-06 03:35:25.632665] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:05.495 [2024-12-06 03:35:25.632695] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:05.495 [2024-12-06 03:35:25.632886] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:05.495 [2024-12-06 03:35:25.632972] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:05.779 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.779 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:05.779 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.779 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.779 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:05.779 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.779 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
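From here the trace repeats the ns_hotplug_stress cycle: while spdk_nvme_perf runs against cnode1, the script removes the namespace, re-adds Delay0, and grows the NULL1 null bdev by one block each pass (null_size=1000, 1001, 1002, ...). A dry-run sketch of that loop, with rpc.py calls echoed rather than executed (the real script invokes scripts/rpc.py against the running target):

```shell
#!/usr/bin/env bash
# Hypothetical dry-run of the hotplug stress loop traced below.
# rpc_py only prints what would be sent to the SPDK JSON-RPC server.
rpc_py() { echo "rpc.py $*"; }

null_size=1000
for _ in 1 2 3; do
    rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # hot-remove ns
    rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
    null_size=$((null_size + 1))
    rpc_py bdev_null_resize NULL1 "$null_size"                     # grow null bdev
done
echo "final null_size=$null_size"
```

Each iteration also checks `kill -0 $PERF_PID` to confirm the perf workload survived the hot-remove, which is the `-- # kill -0 2781505` line recurring in the trace.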
00:27:05.779 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:05.779 [2024-12-06 03:35:25.865267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.779 03:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:06.038 03:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.296 [2024-12-06 03:35:26.246181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.296 03:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:06.555 03:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:06.555 Malloc0 00:27:06.555 03:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:06.813 Delay0 00:27:06.813 03:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:07.072 03:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:07.330 NULL1 00:27:07.330 03:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:07.330 03:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2781505 00:27:07.330 03:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:07.331 03:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:07.331 03:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.707 Read completed with error (sct=0, sc=11) 00:27:08.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:08.707 03:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:08.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:08.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:08.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:08.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:08.966 03:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:08.966 03:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:08.966 true 00:27:08.966 03:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:08.966 03:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:09.902 03:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:10.161 03:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:10.161 03:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:10.161 true 00:27:10.161 03:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:10.161 03:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:10.420 03:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:10.678 03:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:10.678 03:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:10.937 true 00:27:10.937 03:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:10.937 03:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:11.873 03:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:12.132 03:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:12.132 03:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:12.390 true 00:27:12.390 03:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:12.390 03:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:12.390 03:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:12.646 03:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:12.646 03:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:12.905 true 00:27:12.905 03:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:12.905 03:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:14.279 03:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:14.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:14.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:14.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:14.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:14.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:14.279 03:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:14.279 03:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:14.537 true 00:27:14.537 03:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:14.537 03:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.475 03:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.475 03:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:15.475 03:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:15.746 true 00:27:15.746 03:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:15.746 03:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.009 03:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.009 03:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:16.009 03:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:16.268 true 00:27:16.268 03:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:16.268 03:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:17.206 03:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:17.466 03:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:17.466 03:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:17.725 true 00:27:17.725 03:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:17.725 03:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.985 03:35:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.985 03:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:17.985 03:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:18.244 true 00:27:18.244 03:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:18.244 03:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.622 03:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.622 03:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:19.622 03:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:19.882 true 00:27:19.882 03:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:19.883 03:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.820 03:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.820 03:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:20.820 03:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:21.079 true 00:27:21.079 03:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:21.079 03:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.358 03:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:21.358 03:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:21.358 03:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:21.617 true 00:27:21.617 03:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:21.617 03:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.552 03:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.810 03:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:22.810 03:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:23.068 true 00:27:23.068 03:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:23.068 03:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.345 03:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.603 03:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:23.603 03:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:23.603 true 00:27:23.603 03:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:23.603 03:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.977 03:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:24.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.977 03:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:24.977 03:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:25.235 true 00:27:25.235 03:35:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:25.235 03:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:26.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:26.169 03:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:26.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:26.426 03:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:26.426 03:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:26.426 true 00:27:26.426 03:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:26.426 03:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:26.701 03:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:26.959 03:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:26.959 03:35:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:26.959 true 00:27:27.218 03:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:27.218 03:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:28.154 03:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:28.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:28.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:28.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:28.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:28.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:28.414 03:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:28.414 03:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:28.673 true 00:27:28.673 03:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:28.673 03:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:29.614 03:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:29.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:29.614 03:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:29.614 03:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:29.873 true 00:27:29.873 03:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:29.873 03:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.131 03:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:30.389 03:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:30.389 03:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:30.389 true 00:27:30.647 03:35:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:30.647 03:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:31.582 03:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:31.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:31.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:31.841 03:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:31.841 03:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:32.100 true 00:27:32.100 03:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:32.100 03:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.358 03:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:32.359 03:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:32.359 03:35:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:32.617 true 00:27:32.617 03:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:32.617 03:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:33.996 03:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:33.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:33.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:33.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:33.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:33.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:33.996 03:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:33.996 03:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:34.255 true 00:27:34.255 03:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:34.255 03:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:35.192 03:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.192 03:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:35.192 03:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:35.449 true 00:27:35.449 03:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:35.449 03:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.707 03:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.966 03:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:35.966 03:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:35.966 true 00:27:35.966 03:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:35.966 03:35:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.344 03:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.344 03:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:37.344 03:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:37.603 true 00:27:37.603 03:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505 00:27:37.603 03:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.541 03:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:38.541 Initializing NVMe Controllers
00:27:38.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:38.541 Controller IO queue size 128, less than required.
00:27:38.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:38.541 Controller IO queue size 128, less than required.
00:27:38.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:38.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:38.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:38.541 Initialization complete. Launching workers.
00:27:38.541 ========================================================
00:27:38.541                                                                                                       Latency(us)
00:27:38.541 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:38.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1995.37       0.97   44151.94    2737.83 1012575.95
00:27:38.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17285.73       8.44    7404.51    1614.31  385740.43
00:27:38.541 ========================================================
00:27:38.541 Total                                                                    :   19281.10       9.41   11207.43    1614.31 1012575.95
00:27:38.541
00:27:38.541 03:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:27:38.541 03:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:27:38.800 true
00:27:38.800 03:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2781505
00:27:38.800
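The pattern repeating through the log above (steps @44-@50 of ns_hotplug_stress.sh) is: verify the spdk_nvme_perf process is still alive with `kill -0`, remove namespace 1 from the subsystem, re-add the Delay0 namespace, bump `null_size`, and resize the NULL1 bdev. A minimal, self-contained sketch of that control flow follows; `rpc_py` is a stub standing in for SPDK's `scripts/rpc.py` (which needs a live target), and `PERF_PID` is set to the shell's own PID so the liveness check succeeds:

```shell
#!/usr/bin/env bash
# Hedged sketch of the hotplug-stress loop seen in the log above.
rpc_py() { echo "rpc: $*"; }   # stub; the real test invokes scripts/rpc.py
PERF_PID=$$                     # stub; the real test tracks spdk_nvme_perf

null_size=1000
for _ in 1 2 3; do              # the real loop runs until perf exits
    kill -0 "$PERF_PID" || break                                  # @44: stop once perf is gone
    rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # @45
    rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46
    null_size=$((null_size + 1))                                  # @49
    rpc_py bdev_null_resize NULL1 "$null_size"                    # @50
done
echo "final null_size=$null_size"
```

When `kill -0` finally fails (the "No such process" line below), the perf run has finished and the script moves on to teardown and the multi-threaded phase.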
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2781505) - No such process 00:27:38.800 03:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2781505 00:27:38.800 03:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.800 03:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:39.060 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:39.060 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:39.060 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:39.060 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:39.060 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:39.319 null0 00:27:39.319 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:39.319 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:39.319 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:39.578 null1 00:27:39.578 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:39.578 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:39.578 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:39.578 null2 00:27:39.578 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:39.578 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:39.578 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:39.837 null3 00:27:39.837 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:39.837 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:39.837 03:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:40.096 null4 00:27:40.096 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:40.096 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:40.096 03:36:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:40.354 null5 00:27:40.354 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:40.354 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:40.354 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:40.354 null6 00:27:40.354 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:40.354 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:40.354 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:40.612 null7 00:27:40.612 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:40.612 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:40.612 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:40.612 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:40.612 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:40.612 03:36:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:40.613 03:36:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2786884 2786886 2786887 2786889 2786891 2786893 2786895 2786897 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:40.871 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.871 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:27:40.871 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:40.871 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:40.871 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:40.871 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:40.871 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:40.871 03:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.131 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:41.390 03:36:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.390 03:36:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.390 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.391 03:36:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.391 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:41.650 03:36:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:41.650 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:41.650 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:41.650 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.650 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:41.650 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:41.650 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:41.650 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:41.909 03:36:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.909 03:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.909 03:36:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:42.169 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:42.169 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:42.169 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.169 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:42.169 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:42.169 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:42.169 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:42.169 03:36:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.429 03:36:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:42.429 03:36:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.429 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:42.689 03:36:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.689 03:36:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.689 03:36:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:42.689 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:42.948 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:42.948 03:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:42.948 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.948 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:42.948 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:42.948 03:36:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:42.948 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:42.948 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:43.207 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.207 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.207 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:43.207 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.208 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.468 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:43.727 03:36:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.727 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.728 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:43.728 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:43.728 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:43.728 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.728 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:43.728 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:43.728 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:43.987 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:43.987 03:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.987 03:36:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.987 03:36:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:43.987 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:44.247 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:44.247 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:44.247 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:44.247 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:44.247 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.247 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:44.247 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:44.247 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:44.506 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:44.765 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:45.024 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:45.024 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:45.024 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:45.024 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:45.024 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:45.024 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:45.024 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:45.025 03:36:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:45.025 rmmod nvme_tcp 00:27:45.025 rmmod nvme_fabrics 00:27:45.025 rmmod nvme_keyring 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2781033 ']' 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2781033 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2781033 ']' 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2781033 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # uname 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.025 03:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2781033 00:27:45.025 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:45.025 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:45.025 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2781033' 00:27:45.025 killing process with pid 2781033 00:27:45.025 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2781033 00:27:45.025 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2781033 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 
-- # iptables-restore 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.283 03:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.181 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:47.181 00:27:47.181 real 0m47.659s 00:27:47.181 user 2m59.817s 00:27:47.181 sys 0m19.785s 00:27:47.181 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:47.181 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:47.181 ************************************ 00:27:47.181 END TEST nvmf_ns_hotplug_stress 00:27:47.181 ************************************ 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:27:47.440 ************************************ 00:27:47.440 START TEST nvmf_delete_subsystem 00:27:47.440 ************************************ 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:47.440 * Looking for test storage... 00:27:47.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:47.440 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
scripts/common.sh@337 -- # read -ra ver2 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:47.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.441 --rc genhtml_branch_coverage=1 00:27:47.441 --rc genhtml_function_coverage=1 00:27:47.441 --rc genhtml_legend=1 00:27:47.441 --rc geninfo_all_blocks=1 00:27:47.441 --rc geninfo_unexecuted_blocks=1 00:27:47.441 00:27:47.441 ' 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:47.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.441 --rc genhtml_branch_coverage=1 00:27:47.441 --rc genhtml_function_coverage=1 00:27:47.441 --rc genhtml_legend=1 00:27:47.441 --rc geninfo_all_blocks=1 00:27:47.441 --rc geninfo_unexecuted_blocks=1 00:27:47.441 00:27:47.441 ' 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:47.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.441 --rc genhtml_branch_coverage=1 00:27:47.441 --rc genhtml_function_coverage=1 00:27:47.441 --rc genhtml_legend=1 00:27:47.441 --rc geninfo_all_blocks=1 00:27:47.441 --rc geninfo_unexecuted_blocks=1 00:27:47.441 00:27:47.441 ' 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:47.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.441 --rc genhtml_branch_coverage=1 00:27:47.441 --rc genhtml_function_coverage=1 00:27:47.441 --rc genhtml_legend=1 00:27:47.441 --rc geninfo_all_blocks=1 00:27:47.441 --rc geninfo_unexecuted_blocks=1 00:27:47.441 00:27:47.441 ' 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.441 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:47.442 03:36:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:47.442 03:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.710 03:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.710 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:52.711 03:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:52.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:52.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.711 03:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:52.711 Found net devices under 0000:86:00.0: cvl_0_0 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:52.711 Found net devices under 0000:86:00.1: cvl_0_1 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:52.711 03:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.711 03:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.711 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:52.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:27:52.971 00:27:52.971 --- 10.0.0.2 ping statistics --- 00:27:52.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.971 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:27:52.971 00:27:52.971 --- 10.0.0.1 ping statistics --- 00:27:52.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.971 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:52.971 
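The nvmf_tcp_init sequence traced above moves the target-side NIC into its own network namespace so the initiator and target can exchange real NVMe/TCP traffic on a single host. A minimal sketch of those steps, using the interface names and addresses taken from the log (run as root; the commands are wrapped in a function so that defining it executes nothing):

```shell
#!/bin/sh
# Sketch of the netns setup performed by nvmftestinit/nvmf_tcp_init,
# using the interface names and addresses from the log above.
# Requires root; defining the function does not run anything.
setup_nvmf_tcp_netns() {
    target_if=cvl_0_0      # NIC handed to the SPDK target
    initiator_if=cvl_0_1   # NIC kept in the default namespace
    ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"          # target NIC into the namespace

    ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator IP
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target IP

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP port on the initiator side, then verify
    # reachability in both directions, exactly as the log does.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With this layout, the target process is later started with `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`, which is why the log prefixes nvmf_tgt with `NVMF_TARGET_NS_CMD`.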
03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2791639 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2791639 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2791639 ']' 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:52.971 03:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:52.971 [2024-12-06 03:36:13.016334] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:52.971 [2024-12-06 03:36:13.017313] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:27:52.971 [2024-12-06 03:36:13.017349] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.971 [2024-12-06 03:36:13.085687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:53.231 [2024-12-06 03:36:13.130732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.231 [2024-12-06 03:36:13.130765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.231 [2024-12-06 03:36:13.130773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.231 [2024-12-06 03:36:13.130779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.231 [2024-12-06 03:36:13.130784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.231 [2024-12-06 03:36:13.131980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.231 [2024-12-06 03:36:13.131984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.231 [2024-12-06 03:36:13.201374] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:53.231 [2024-12-06 03:36:13.201502] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:53.231 [2024-12-06 03:36:13.201585] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:53.231 [2024-12-06 03:36:13.268489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:53.231 [2024-12-06 03:36:13.288615] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:53.231 NULL1 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:53.231 Delay0 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2791750 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:53.231 03:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:53.490 [2024-12-06 03:36:13.376977] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
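The rpc_cmd calls above (transport, subsystem, listener, null bdev, delay bdev, namespace) amount to the following RPC sequence. This is a sketch expressed as direct `scripts/rpc.py` invocations against an already-running nvmf_tgt, with the arguments copied from the log; the `rpc` path is an assumption about where the script lives relative to the working directory:

```shell
#!/bin/sh
# Sketch of the delete_subsystem.sh setup, expressed as scripts/rpc.py
# calls against a running nvmf_tgt (arguments taken from the rpc_cmd
# lines in the log above). Defining the function executes nothing.
setup_delete_subsystem_test() {
    rpc=./scripts/rpc.py                 # assumed path to SPDK's rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

    # NULL1: 1000 blocks of 512 B. Delay0 wraps it with a large
    # artificial latency (the bdev_delay_create values are in
    # microseconds), so I/O is still queued when the subsystem
    # is deleted mid-run.
    "$rpc" bdev_null_create NULL1 1000 512
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0
}
```

The log then launches `spdk_nvme_perf` in the background against the 10.0.0.2:4420 listener, sleeps two seconds, and issues `nvmf_delete_subsystem` while that I/O is still in flight behind the delay bdev, which is what produces the aborted completions below.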
00:27:55.395 03:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.395 03:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.395 03:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, 
sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 [2024-12-06 03:36:15.513650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0680 is same with the state(6) to be set 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with 
error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Write completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 Write completed with 
error (sct=0, sc=8) 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.395 starting I/O failed: -6 00:27:55.395 Read completed with error (sct=0, sc=8) 00:27:55.396 [2024-12-06 03:36:15.514018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f165c000c40 is same with the state(6) to be set 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 
00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write 
completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Write completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:55.396 Read completed with error (sct=0, sc=8) 00:27:56.773 [2024-12-06 03:36:16.473178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf19b0 is same with the state(6) to be set 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 Write completed with error (sct=0, sc=8) 00:27:56.773 Write completed with error (sct=0, sc=8) 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 Read completed with error 
(sct=0, sc=8) 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 Write completed with error (sct=0, sc=8) 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 Write completed with error (sct=0, sc=8) 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.773 [2024-12-06 03:36:16.514284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f165c00d350 is same with the state(6) to be set 00:27:56.773 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 
00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 [2024-12-06 03:36:16.516023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf0860 is same with the state(6) to be set 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 
[2024-12-06 03:36:16.516188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf04a0 is same with the state(6) to be set 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Read completed with error (sct=0, sc=8) 00:27:56.774 Write completed with error (sct=0, sc=8) 00:27:56.774 [2024-12-06 03:36:16.516958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf02c0 is same with the state(6) to be set 00:27:56.774 03:36:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.774 Initializing NVMe Controllers 00:27:56.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:56.774 Controller IO queue size 128, less than required. 00:27:56.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:56.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:56.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:56.774 Initialization complete. Launching workers. 00:27:56.774 ======================================================== 00:27:56.774 Latency(us) 00:27:56.774 Device Information : IOPS MiB/s Average min max 00:27:56.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.59 0.09 955910.91 817.25 1013300.08 00:27:56.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.81 0.07 885953.86 235.52 1013387.08 00:27:56.774 ======================================================== 00:27:56.774 Total : 334.41 0.16 923942.41 235.52 1013387.08 00:27:56.774 00:27:56.774 [2024-12-06 03:36:16.517691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf19b0 (9): Bad file descriptor 00:27:56.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:56.774 03:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:56.774 03:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2791750 00:27:56.774 03:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:57.034 03:36:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2791750 00:27:57.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2791750) - No such process 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2791750 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2791750 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2791750 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:57.034 03:36:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:57.034 [2024-12-06 03:36:17.044874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:57.034 03:36:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2792246 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2792246 00:27:57.034 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:57.034 [2024-12-06 03:36:17.099970] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:27:57.603 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:57.603 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2792246 00:27:57.603 03:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:58.172 03:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:58.172 03:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2792246 00:27:58.172 03:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:58.742 03:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:58.742 03:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2792246 00:27:58.742 03:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:59.001 03:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:59.001 03:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2792246 00:27:59.001 03:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:59.569 03:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:59.569 03:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2792246 00:27:59.569 03:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:00.163 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:00.163 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2792246 00:28:00.163 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:00.163 Initializing NVMe Controllers 00:28:00.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.163 Controller IO queue size 128, less than required. 00:28:00.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:00.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:00.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:00.163 Initialization complete. Launching workers. 
00:28:00.163 ======================================================== 00:28:00.163 Latency(us) 00:28:00.163 Device Information : IOPS MiB/s Average min max 00:28:00.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003411.77 1000165.89 1010636.52 00:28:00.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005131.44 1000179.69 1011504.19 00:28:00.163 ======================================================== 00:28:00.163 Total : 256.00 0.12 1004271.60 1000165.89 1011504.19 00:28:00.163 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2792246 00:28:00.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2792246) - No such process 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2792246 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:00.734 rmmod nvme_tcp 00:28:00.734 rmmod nvme_fabrics 00:28:00.734 rmmod nvme_keyring 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2791639 ']' 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2791639 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2791639 ']' 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2791639 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2791639 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:00.734 03:36:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2791639' 00:28:00.734 killing process with pid 2791639 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2791639 00:28:00.734 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2791639 00:28:00.993 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:00.993 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:00.993 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:00.993 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:00.993 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:00.993 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:00.993 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:00.994 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:00.994 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:00.994 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.994 03:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.994 03:36:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.900 03:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:02.900 00:28:02.900 real 0m15.586s 00:28:02.900 user 0m25.760s 00:28:02.900 sys 0m5.715s 00:28:02.900 03:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:02.900 03:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:02.900 ************************************ 00:28:02.900 END TEST nvmf_delete_subsystem 00:28:02.900 ************************************ 00:28:02.900 03:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:02.900 03:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:02.900 03:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:02.900 03:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:02.900 ************************************ 00:28:02.900 START TEST nvmf_host_management 00:28:02.900 ************************************ 00:28:02.900 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:03.160 * Looking for test storage... 
00:28:03.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.160 03:36:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.160 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:03.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.161 --rc genhtml_branch_coverage=1 00:28:03.161 --rc genhtml_function_coverage=1 00:28:03.161 --rc genhtml_legend=1 00:28:03.161 --rc geninfo_all_blocks=1 00:28:03.161 --rc geninfo_unexecuted_blocks=1 00:28:03.161 00:28:03.161 ' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:03.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.161 --rc genhtml_branch_coverage=1 00:28:03.161 --rc genhtml_function_coverage=1 00:28:03.161 --rc genhtml_legend=1 00:28:03.161 --rc geninfo_all_blocks=1 00:28:03.161 --rc geninfo_unexecuted_blocks=1 00:28:03.161 00:28:03.161 ' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:03.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.161 --rc genhtml_branch_coverage=1 00:28:03.161 --rc genhtml_function_coverage=1 00:28:03.161 --rc genhtml_legend=1 00:28:03.161 --rc geninfo_all_blocks=1 00:28:03.161 --rc geninfo_unexecuted_blocks=1 00:28:03.161 00:28:03.161 ' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:03.161 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.161 --rc genhtml_branch_coverage=1 00:28:03.161 --rc genhtml_function_coverage=1 00:28:03.161 --rc genhtml_legend=1 00:28:03.161 --rc geninfo_all_blocks=1 00:28:03.161 --rc geninfo_unexecuted_blocks=1 00:28:03.161 00:28:03.161 ' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.161 03:36:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.161 
03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.161 03:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:08.443 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:08.444 
03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.444 03:36:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:08.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.444 03:36:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:08.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.444 03:36:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:08.444 Found net devices under 0000:86:00.0: cvl_0_0 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:08.444 Found net devices under 0000:86:00.1: cvl_0_1 00:28:08.444 03:36:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:08.444 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.445 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.445 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:08.445 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:08.445 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.445 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:08.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:28:08.742 00:28:08.742 --- 10.0.0.2 ping statistics --- 00:28:08.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.742 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:08.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:28:08.742 00:28:08.742 --- 10.0.0.1 ping statistics --- 00:28:08.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.742 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2796407 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2796407 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2796407 ']' 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:08.742 03:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:08.742 [2024-12-06 03:36:28.813730] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:08.742 [2024-12-06 03:36:28.814673] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:28:08.742 [2024-12-06 03:36:28.814708] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.048 [2024-12-06 03:36:28.881198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.048 [2024-12-06 03:36:28.922831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.048 [2024-12-06 03:36:28.922871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.048 [2024-12-06 03:36:28.922878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.048 [2024-12-06 03:36:28.922885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.048 [2024-12-06 03:36:28.922890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:09.048 [2024-12-06 03:36:28.924516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.048 [2024-12-06 03:36:28.924604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.048 [2024-12-06 03:36:28.924713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.048 [2024-12-06 03:36:28.924713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:09.048 [2024-12-06 03:36:28.992194] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:09.048 [2024-12-06 03:36:28.992349] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:09.048 [2024-12-06 03:36:28.992773] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:09.048 [2024-12-06 03:36:28.992774] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:09.048 [2024-12-06 03:36:28.993021] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.048 [2024-12-06 03:36:29.057460] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.048 03:36:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.048 Malloc0 00:28:09.048 [2024-12-06 03:36:29.133387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:09.048 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2796462 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2796462 /var/tmp/bdevperf.sock 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2796462 ']' 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:09.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:09.330 { 00:28:09.330 "params": { 00:28:09.330 "name": "Nvme$subsystem", 00:28:09.330 "trtype": "$TEST_TRANSPORT", 00:28:09.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.330 "adrfam": "ipv4", 00:28:09.330 "trsvcid": "$NVMF_PORT", 00:28:09.330 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem",
00:28:09.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:09.330 "hdgst": ${hdgst:-false},
00:28:09.330 "ddgst": ${ddgst:-false}
00:28:09.330 },
00:28:09.330 "method": "bdev_nvme_attach_controller"
00:28:09.330 }
00:28:09.330 EOF
00:28:09.330 )")
00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:28:09.330 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:09.330 "params": {
00:28:09.330 "name": "Nvme0",
00:28:09.330 "trtype": "tcp",
00:28:09.330 "traddr": "10.0.0.2",
00:28:09.330 "adrfam": "ipv4",
00:28:09.330 "trsvcid": "4420",
00:28:09.330 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:09.330 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:09.330 "hdgst": false,
00:28:09.330 "ddgst": false
00:28:09.330 },
00:28:09.330 "method": "bdev_nvme_attach_controller"
00:28:09.330 }'
00:28:09.330 [2024-12-06 03:36:29.230839] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization...
00:28:09.330 [2024-12-06 03:36:29.230887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796462 ]
00:28:09.330 [2024-12-06 03:36:29.295202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:09.330 [2024-12-06 03:36:29.336593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:09.613 Running I/O for 10 seconds...
00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:09.613 03:36:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:28:09.613 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.984 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.984 [2024-12-06 03:36:29.897341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:09.984 [2024-12-06 03:36:29.897589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 
03:36:29.897674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.984 [2024-12-06 03:36:29.897802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.984 [2024-12-06 03:36:29.897815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897846] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.897988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.897997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:09.985 [2024-12-06 03:36:29.898032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 
03:36:29.898116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898289] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.985 [2024-12-06 03:36:29.898366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.985 [2024-12-06 03:36:29.898374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.985 [2024-12-06 03:36:29.898381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:09.985 [2024-12-06 03:36:29.899337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:09.985 task offset: 98304 on job bdev=Nvme0n1 fails
00:28:09.985
00:28:09.985 Latency(us)
00:28:09.985 [2024-12-06T02:36:30.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:09.985 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:09.985 Job: Nvme0n1 ended in about 0.40 seconds with error
00:28:09.985 Verification LBA range: start 0x0 length 0x400
00:28:09.985 Nvme0n1 : 0.40 1897.42 118.59 158.12 0.00 30289.74 1574.29 27126.21
00:28:09.985 [2024-12-06T02:36:30.126Z] ===================================================================================================================
00:28:09.985 [2024-12-06T02:36:30.126Z] Total : 1897.42 118.59 158.12 0.00 30289.74 1574.29 27126.21
00:28:09.986 [2024-12-06 03:36:29.901763] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:09.986 [2024-12-06 03:36:29.901786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1191120 (9): Bad file descriptor
00:28:09.986 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.986 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:09.986 [2024-12-06 03:36:29.902680] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:28:09.986 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management --
common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.986 [2024-12-06 03:36:29.902754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:09.986 [2024-12-06 03:36:29.902777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.986 [2024-12-06 03:36:29.902790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:09.986 [2024-12-06 03:36:29.902799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:09.986 [2024-12-06 03:36:29.902808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.986 [2024-12-06 03:36:29.902815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1191120 00:28:09.986 [2024-12-06 03:36:29.902835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1191120 (9): Bad file descriptor 00:28:09.986 [2024-12-06 03:36:29.902847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:09.986 [2024-12-06 03:36:29.902854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:09.986 [2024-12-06 03:36:29.902864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:09.986 [2024-12-06 03:36:29.902873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:09.986 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:09.986 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.986 03:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2796462 00:28:10.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2796462) - No such process 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:10.921 { 00:28:10.921 "params": { 
00:28:10.921 "name": "Nvme$subsystem", 00:28:10.921 "trtype": "$TEST_TRANSPORT", 00:28:10.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.921 "adrfam": "ipv4", 00:28:10.921 "trsvcid": "$NVMF_PORT", 00:28:10.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.921 "hdgst": ${hdgst:-false}, 00:28:10.921 "ddgst": ${ddgst:-false} 00:28:10.921 }, 00:28:10.921 "method": "bdev_nvme_attach_controller" 00:28:10.921 } 00:28:10.921 EOF 00:28:10.921 )") 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:10.921 03:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:10.921 "params": { 00:28:10.921 "name": "Nvme0", 00:28:10.921 "trtype": "tcp", 00:28:10.921 "traddr": "10.0.0.2", 00:28:10.921 "adrfam": "ipv4", 00:28:10.921 "trsvcid": "4420", 00:28:10.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:10.921 "hdgst": false, 00:28:10.921 "ddgst": false 00:28:10.921 }, 00:28:10.921 "method": "bdev_nvme_attach_controller" 00:28:10.921 }' 00:28:10.921 [2024-12-06 03:36:30.971129] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:28:10.921 [2024-12-06 03:36:30.971177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796724 ] 00:28:10.921 [2024-12-06 03:36:31.034288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.179 [2024-12-06 03:36:31.076060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.179 Running I/O for 1 seconds... 00:28:12.152 1920.00 IOPS, 120.00 MiB/s 00:28:12.152 Latency(us) 00:28:12.152 [2024-12-06T02:36:32.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.152 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:12.152 Verification LBA range: start 0x0 length 0x400 00:28:12.152 Nvme0n1 : 1.00 1978.01 123.63 0.00 0.00 31842.06 6610.59 28038.01 00:28:12.152 [2024-12-06T02:36:32.293Z] =================================================================================================================== 00:28:12.152 [2024-12-06T02:36:32.293Z] Total : 1978.01 123.63 0.00 0.00 31842.06 6610.59 28038.01 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.410 rmmod nvme_tcp 00:28:12.410 rmmod nvme_fabrics 00:28:12.410 rmmod nvme_keyring 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2796407 ']' 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2796407 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2796407 ']' 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2796407 00:28:12.410 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:12.410 03:36:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.411 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2796407 00:28:12.411 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:12.411 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:12.411 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2796407' 00:28:12.411 killing process with pid 2796407 00:28:12.411 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2796407 00:28:12.411 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2796407 00:28:12.669 [2024-12-06 03:36:32.690520] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:12.669 03:36:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.669 03:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.201 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.201 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:15.201 00:28:15.201 real 0m11.780s 00:28:15.201 user 0m17.102s 00:28:15.201 sys 0m5.886s 00:28:15.201 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.201 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.201 ************************************ 00:28:15.201 END TEST nvmf_host_management 00:28:15.201 ************************************ 00:28:15.201 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:15.201 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:15.201 
03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.201 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:15.201 ************************************ 00:28:15.201 START TEST nvmf_lvol 00:28:15.202 ************************************ 00:28:15.202 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:15.202 * Looking for test storage... 00:28:15.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.202 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:15.202 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:28:15.202 03:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.202 03:36:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:15.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.202 --rc genhtml_branch_coverage=1 00:28:15.202 --rc 
genhtml_function_coverage=1 00:28:15.202 --rc genhtml_legend=1 00:28:15.202 --rc geninfo_all_blocks=1 00:28:15.202 --rc geninfo_unexecuted_blocks=1 00:28:15.202 00:28:15.202 ' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:15.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.202 --rc genhtml_branch_coverage=1 00:28:15.202 --rc genhtml_function_coverage=1 00:28:15.202 --rc genhtml_legend=1 00:28:15.202 --rc geninfo_all_blocks=1 00:28:15.202 --rc geninfo_unexecuted_blocks=1 00:28:15.202 00:28:15.202 ' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:15.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.202 --rc genhtml_branch_coverage=1 00:28:15.202 --rc genhtml_function_coverage=1 00:28:15.202 --rc genhtml_legend=1 00:28:15.202 --rc geninfo_all_blocks=1 00:28:15.202 --rc geninfo_unexecuted_blocks=1 00:28:15.202 00:28:15.202 ' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:15.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.202 --rc genhtml_branch_coverage=1 00:28:15.202 --rc genhtml_function_coverage=1 00:28:15.202 --rc genhtml_legend=1 00:28:15.202 --rc geninfo_all_blocks=1 00:28:15.202 --rc geninfo_unexecuted_blocks=1 00:28:15.202 00:28:15.202 ' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.202 03:36:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.202 03:36:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:15.202 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.203 03:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:20.512 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:20.512 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.512 03:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.512 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:20.513 Found net devices under 0000:86:00.0: cvl_0_0 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:20.513 Found net devices under 0000:86:00.1: cvl_0_1 00:28:20.513 03:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.513 03:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:20.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:20.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:28:20.513 00:28:20.513 --- 10.0.0.2 ping statistics --- 00:28:20.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.513 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:28:20.513 00:28:20.513 --- 10.0.0.1 ping statistics --- 00:28:20.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.513 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:20.513 
03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2800469 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2800469 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2800469 ']' 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.513 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:20.513 [2024-12-06 03:36:40.533196] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:20.513 [2024-12-06 03:36:40.534106] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:28:20.513 [2024-12-06 03:36:40.534138] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.513 [2024-12-06 03:36:40.600985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:20.513 [2024-12-06 03:36:40.643963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.513 [2024-12-06 03:36:40.644001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.513 [2024-12-06 03:36:40.644008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.513 [2024-12-06 03:36:40.644015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.513 [2024-12-06 03:36:40.644021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.513 [2024-12-06 03:36:40.645334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.513 [2024-12-06 03:36:40.645430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.513 [2024-12-06 03:36:40.645430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.773 [2024-12-06 03:36:40.715549] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:20.773 [2024-12-06 03:36:40.715587] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:20.773 [2024-12-06 03:36:40.715658] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:20.773 [2024-12-06 03:36:40.715769] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:20.773 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.773 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:20.773 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:20.773 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.773 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:20.773 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.773 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:21.033 [2024-12-06 03:36:40.953917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.033 03:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:21.293 03:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:21.293 03:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:21.293 03:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:21.293 03:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:21.552 03:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:21.811 03:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8e74ab8f-8d2f-40ab-9154-f6fc4054cdce 00:28:21.811 03:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e74ab8f-8d2f-40ab-9154-f6fc4054cdce lvol 20 00:28:22.070 03:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1a1ff660-1263-4629-acea-7df97b65f0e2 00:28:22.071 03:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:22.330 03:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a1ff660-1263-4629-acea-7df97b65f0e2 00:28:22.330 03:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:22.587 [2024-12-06 03:36:42.574068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.587 03:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:22.844 
03:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2800829 00:28:22.844 03:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:22.844 03:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:23.781 03:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1a1ff660-1263-4629-acea-7df97b65f0e2 MY_SNAPSHOT 00:28:24.040 03:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=05aac781-af48-460a-9e5e-016eed6f86e3 00:28:24.040 03:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1a1ff660-1263-4629-acea-7df97b65f0e2 30 00:28:24.300 03:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 05aac781-af48-460a-9e5e-016eed6f86e3 MY_CLONE 00:28:24.559 03:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4b83827c-e690-4b10-80fe-3827e976e3f6 00:28:24.559 03:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4b83827c-e690-4b10-80fe-3827e976e3f6 00:28:25.128 03:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2800829 00:28:33.246 Initializing NVMe Controllers 00:28:33.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:33.246 
Controller IO queue size 128, less than required. 00:28:33.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:33.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:33.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:33.246 Initialization complete. Launching workers. 00:28:33.246 ======================================================== 00:28:33.246 Latency(us) 00:28:33.246 Device Information : IOPS MiB/s Average min max 00:28:33.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12214.16 47.71 10486.84 5617.01 52172.35 00:28:33.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12056.86 47.10 10620.61 4309.60 46497.07 00:28:33.246 ======================================================== 00:28:33.246 Total : 24271.02 94.81 10553.29 4309.60 52172.35 00:28:33.246 00:28:33.246 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:33.505 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a1ff660-1263-4629-acea-7df97b65f0e2 00:28:33.765 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e74ab8f-8d2f-40ab-9154-f6fc4054cdce 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.024 rmmod nvme_tcp 00:28:34.024 rmmod nvme_fabrics 00:28:34.024 rmmod nvme_keyring 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2800469 ']' 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2800469 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2800469 ']' 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2800469 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.024 03:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2800469 00:28:34.024 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:34.024 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:34.024 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800469' 00:28:34.024 killing process with pid 2800469 00:28:34.024 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2800469 00:28:34.024 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2800469 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.284 03:36:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.284 03:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.188 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:36.188 00:28:36.188 real 0m21.443s 00:28:36.188 user 0m55.808s 00:28:36.188 sys 0m9.592s 00:28:36.188 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.188 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:36.188 ************************************ 00:28:36.188 END TEST nvmf_lvol 00:28:36.188 ************************************ 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:36.448 ************************************ 00:28:36.448 START TEST nvmf_lvs_grow 00:28:36.448 ************************************ 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:36.448 * Looking for test storage... 
00:28:36.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.448 03:36:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.448 03:36:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:36.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.448 --rc genhtml_branch_coverage=1 00:28:36.448 --rc genhtml_function_coverage=1 00:28:36.448 --rc genhtml_legend=1 00:28:36.448 --rc geninfo_all_blocks=1 00:28:36.448 --rc geninfo_unexecuted_blocks=1 00:28:36.448 00:28:36.448 ' 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:36.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.448 --rc genhtml_branch_coverage=1 00:28:36.448 --rc genhtml_function_coverage=1 00:28:36.448 --rc genhtml_legend=1 00:28:36.448 --rc geninfo_all_blocks=1 00:28:36.448 --rc geninfo_unexecuted_blocks=1 00:28:36.448 00:28:36.448 ' 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:36.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.448 --rc genhtml_branch_coverage=1 00:28:36.448 --rc genhtml_function_coverage=1 00:28:36.448 --rc genhtml_legend=1 00:28:36.448 --rc geninfo_all_blocks=1 00:28:36.448 --rc geninfo_unexecuted_blocks=1 00:28:36.448 00:28:36.448 ' 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:36.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.448 --rc genhtml_branch_coverage=1 00:28:36.448 --rc genhtml_function_coverage=1 00:28:36.448 --rc genhtml_legend=1 00:28:36.448 --rc geninfo_all_blocks=1 00:28:36.448 --rc 
geninfo_unexecuted_blocks=1 00:28:36.448 00:28:36.448 ' 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:36.448 03:36:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.448 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.448 03:36:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:36.449 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:36.708 03:36:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:36.708 03:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:42.037 
03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.037 03:37:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.037 03:37:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:42.037 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.037 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:42.038 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:42.038 Found net devices under 0000:86:00.0: cvl_0_0 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.038 03:37:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:42.038 Found net devices under 0000:86:00.1: cvl_0_1 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.038 
03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.038 03:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:28:42.038 00:28:42.038 --- 10.0.0.2 ping statistics --- 00:28:42.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.038 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:28:42.038 00:28:42.038 --- 10.0.0.1 ping statistics --- 00:28:42.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.038 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:42.038 03:37:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2806094 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2806094 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2806094 ']' 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.038 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:42.038 [2024-12-06 03:37:02.119451] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:42.038 [2024-12-06 03:37:02.120404] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
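The `waitforlisten 2806094` step above polls until the freshly started nvmf_tgt process exposes its RPC socket at /var/tmp/spdk.sock. A minimal sketch of that polling pattern (the function name and retry cadence here are illustrative, not SPDK's actual implementation):

```shell
# Poll for a UNIX-domain socket to appear, mirroring the max_retries=100
# loop seen in the trace; returns non-zero if the socket never shows up.
wait_for_rpc_sock() {
  local sock=$1 max_retries=${2:-100} i=0
  while (( i < max_retries )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
    i=$((i + 1))
  done
  return 1
}
```

SPDK's real helper additionally verifies between retries that the target PID is still alive, so a crashed nvmf_tgt fails fast instead of burning the whole retry budget.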
00:28:42.038 [2024-12-06 03:37:02.120437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.298 [2024-12-06 03:37:02.186500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.298 [2024-12-06 03:37:02.227325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.298 [2024-12-06 03:37:02.227362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.298 [2024-12-06 03:37:02.227370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.298 [2024-12-06 03:37:02.227376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.298 [2024-12-06 03:37:02.227382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.298 [2024-12-06 03:37:02.227925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.298 [2024-12-06 03:37:02.296571] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:42.298 [2024-12-06 03:37:02.296772] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
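The interface plumbing performed earlier in this trace (ip netns add, moving cvl_0_0 into the namespace, addressing both sides, then cross-pinging) can be replayed as a dry-run sketch. Interface names and the 10.0.0.x addresses are the ones from this specific run; commands are echoed rather than executed so the sketch runs without root:

```shell
# Rebuild of the test topology from the trace: target NIC in a private
# netns at 10.0.0.2, initiator NIC left in the root namespace at 10.0.0.1.
NS=cvl_0_0_ns_spdk TGT=cvl_0_0 INI=cvl_0_1
run() { echo "+ $*"; }               # swap for: run() { "$@"; } to execute
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ping -c 1 10.0.0.2                            # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1        # target ns -> initiator
```

Keeping the target in its own namespace is what lets the harness exercise a real TCP path between "two hosts" on a single machine, which is why the nvmf_tgt launch above is wrapped in `ip netns exec cvl_0_0_ns_spdk`.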
00:28:42.298 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.298 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:28:42.298 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.298 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:42.298 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:42.298 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.298 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:42.557 [2024-12-06 03:37:02.528430] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:42.557 ************************************ 00:28:42.557 START TEST lvs_grow_clean 00:28:42.557 ************************************ 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:28:42.557 03:37:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:42.557 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:42.816 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:42.816 03:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:43.074 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:43.074 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:43.074 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:43.074 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:43.074 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:43.074 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 lvol 150 00:28:43.333 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6bc50b59-8bcd-44d9-a4af-ea0846edfc66 00:28:43.333 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:43.333 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:43.592 [2024-12-06 03:37:03.568322] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:43.592 [2024-12-06 03:37:03.568451] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:43.592 true 00:28:43.592 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:43.592 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:43.851 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:43.851 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:44.110 03:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6bc50b59-8bcd-44d9-a4af-ea0846edfc66 00:28:44.110 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:44.369 [2024-12-06 03:37:04.360541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.369 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:44.627 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2806594 00:28:44.628 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:44.628 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:44.628 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2806594 /var/tmp/bdevperf.sock 00:28:44.628 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2806594 ']' 00:28:44.628 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:44.628 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.628 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:44.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:44.628 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.628 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:44.628 [2024-12-06 03:37:04.621765] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:28:44.628 [2024-12-06 03:37:04.621813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806594 ] 00:28:44.628 [2024-12-06 03:37:04.683835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.628 [2024-12-06 03:37:04.726442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.886 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.886 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:28:44.886 03:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:45.152 Nvme0n1 00:28:45.152 03:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:45.413 [ 00:28:45.413 { 00:28:45.413 "name": "Nvme0n1", 00:28:45.413 "aliases": [ 00:28:45.413 "6bc50b59-8bcd-44d9-a4af-ea0846edfc66" 00:28:45.413 ], 00:28:45.413 "product_name": "NVMe disk", 00:28:45.413 
"block_size": 4096, 00:28:45.413 "num_blocks": 38912, 00:28:45.413 "uuid": "6bc50b59-8bcd-44d9-a4af-ea0846edfc66", 00:28:45.413 "numa_id": 1, 00:28:45.413 "assigned_rate_limits": { 00:28:45.413 "rw_ios_per_sec": 0, 00:28:45.413 "rw_mbytes_per_sec": 0, 00:28:45.413 "r_mbytes_per_sec": 0, 00:28:45.413 "w_mbytes_per_sec": 0 00:28:45.413 }, 00:28:45.413 "claimed": false, 00:28:45.413 "zoned": false, 00:28:45.413 "supported_io_types": { 00:28:45.413 "read": true, 00:28:45.413 "write": true, 00:28:45.413 "unmap": true, 00:28:45.413 "flush": true, 00:28:45.413 "reset": true, 00:28:45.413 "nvme_admin": true, 00:28:45.413 "nvme_io": true, 00:28:45.413 "nvme_io_md": false, 00:28:45.413 "write_zeroes": true, 00:28:45.413 "zcopy": false, 00:28:45.413 "get_zone_info": false, 00:28:45.413 "zone_management": false, 00:28:45.413 "zone_append": false, 00:28:45.413 "compare": true, 00:28:45.413 "compare_and_write": true, 00:28:45.413 "abort": true, 00:28:45.413 "seek_hole": false, 00:28:45.413 "seek_data": false, 00:28:45.413 "copy": true, 00:28:45.413 "nvme_iov_md": false 00:28:45.413 }, 00:28:45.413 "memory_domains": [ 00:28:45.413 { 00:28:45.413 "dma_device_id": "system", 00:28:45.413 "dma_device_type": 1 00:28:45.413 } 00:28:45.413 ], 00:28:45.413 "driver_specific": { 00:28:45.413 "nvme": [ 00:28:45.413 { 00:28:45.413 "trid": { 00:28:45.413 "trtype": "TCP", 00:28:45.413 "adrfam": "IPv4", 00:28:45.413 "traddr": "10.0.0.2", 00:28:45.413 "trsvcid": "4420", 00:28:45.413 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:45.413 }, 00:28:45.413 "ctrlr_data": { 00:28:45.413 "cntlid": 1, 00:28:45.413 "vendor_id": "0x8086", 00:28:45.413 "model_number": "SPDK bdev Controller", 00:28:45.413 "serial_number": "SPDK0", 00:28:45.413 "firmware_revision": "25.01", 00:28:45.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:45.413 "oacs": { 00:28:45.413 "security": 0, 00:28:45.413 "format": 0, 00:28:45.413 "firmware": 0, 00:28:45.413 "ns_manage": 0 00:28:45.413 }, 00:28:45.413 "multi_ctrlr": true, 
00:28:45.413 "ana_reporting": false 00:28:45.413 }, 00:28:45.413 "vs": { 00:28:45.413 "nvme_version": "1.3" 00:28:45.413 }, 00:28:45.413 "ns_data": { 00:28:45.413 "id": 1, 00:28:45.413 "can_share": true 00:28:45.413 } 00:28:45.413 } 00:28:45.414 ], 00:28:45.414 "mp_policy": "active_passive" 00:28:45.414 } 00:28:45.414 } 00:28:45.414 ] 00:28:45.414 03:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2806606 00:28:45.414 03:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:45.414 03:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:45.414 Running I/O for 10 seconds... 00:28:46.791 Latency(us) 00:28:46.791 [2024-12-06T02:37:06.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:46.791 Nvme0n1 : 1.00 22543.00 88.06 0.00 0.00 0.00 0.00 0.00 00:28:46.791 [2024-12-06T02:37:06.932Z] =================================================================================================================== 00:28:46.791 [2024-12-06T02:37:06.932Z] Total : 22543.00 88.06 0.00 0.00 0.00 0.00 0.00 00:28:46.791 00:28:47.360 03:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:47.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:47.619 Nvme0n1 : 2.00 22694.00 88.65 0.00 0.00 0.00 0.00 0.00 00:28:47.619 [2024-12-06T02:37:07.760Z] 
=================================================================================================================== 00:28:47.619 [2024-12-06T02:37:07.760Z] Total : 22694.00 88.65 0.00 0.00 0.00 0.00 0.00 00:28:47.619 00:28:47.619 true 00:28:47.619 03:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:47.619 03:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:47.878 03:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:47.878 03:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:47.878 03:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2806606 00:28:48.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:48.448 Nvme0n1 : 3.00 22749.33 88.86 0.00 0.00 0.00 0.00 0.00 00:28:48.448 [2024-12-06T02:37:08.589Z] =================================================================================================================== 00:28:48.448 [2024-12-06T02:37:08.589Z] Total : 22749.33 88.86 0.00 0.00 0.00 0.00 0.00 00:28:48.448 00:28:49.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:49.386 Nvme0n1 : 4.00 22777.00 88.97 0.00 0.00 0.00 0.00 0.00 00:28:49.386 [2024-12-06T02:37:09.527Z] =================================================================================================================== 00:28:49.386 [2024-12-06T02:37:09.527Z] Total : 22777.00 88.97 0.00 0.00 0.00 0.00 0.00 00:28:49.386 00:28:50.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:28:50.764 Nvme0n1 : 5.00 22793.60 89.04 0.00 0.00 0.00 0.00 0.00 00:28:50.764 [2024-12-06T02:37:10.905Z] =================================================================================================================== 00:28:50.764 [2024-12-06T02:37:10.905Z] Total : 22793.60 89.04 0.00 0.00 0.00 0.00 0.00 00:28:50.764 00:28:51.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.701 Nvme0n1 : 6.00 22847.00 89.25 0.00 0.00 0.00 0.00 0.00 00:28:51.701 [2024-12-06T02:37:11.842Z] =================================================================================================================== 00:28:51.701 [2024-12-06T02:37:11.842Z] Total : 22847.00 89.25 0.00 0.00 0.00 0.00 0.00 00:28:51.701 00:28:52.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.637 Nvme0n1 : 7.00 22867.00 89.32 0.00 0.00 0.00 0.00 0.00 00:28:52.637 [2024-12-06T02:37:12.778Z] =================================================================================================================== 00:28:52.637 [2024-12-06T02:37:12.778Z] Total : 22867.00 89.32 0.00 0.00 0.00 0.00 0.00 00:28:52.637 00:28:53.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:53.573 Nvme0n1 : 8.00 22913.75 89.51 0.00 0.00 0.00 0.00 0.00 00:28:53.573 [2024-12-06T02:37:13.714Z] =================================================================================================================== 00:28:53.573 [2024-12-06T02:37:13.714Z] Total : 22913.75 89.51 0.00 0.00 0.00 0.00 0.00 00:28:53.573 00:28:54.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.511 Nvme0n1 : 9.00 22937.89 89.60 0.00 0.00 0.00 0.00 0.00 00:28:54.511 [2024-12-06T02:37:14.652Z] =================================================================================================================== 00:28:54.511 [2024-12-06T02:37:14.652Z] Total : 22937.89 89.60 0.00 0.00 0.00 0.00 0.00 00:28:54.511 
00:28:55.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:55.448 Nvme0n1 : 10.00 22968.20 89.72 0.00 0.00 0.00 0.00 0.00 00:28:55.448 [2024-12-06T02:37:15.589Z] =================================================================================================================== 00:28:55.448 [2024-12-06T02:37:15.589Z] Total : 22968.20 89.72 0.00 0.00 0.00 0.00 0.00 00:28:55.448 00:28:55.448 00:28:55.448 Latency(us) 00:28:55.448 [2024-12-06T02:37:15.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:55.448 Nvme0n1 : 10.01 22966.98 89.71 0.00 0.00 5570.09 3234.06 15728.64 00:28:55.448 [2024-12-06T02:37:15.589Z] =================================================================================================================== 00:28:55.448 [2024-12-06T02:37:15.589Z] Total : 22966.98 89.71 0.00 0.00 5570.09 3234.06 15728.64 00:28:55.448 { 00:28:55.448 "results": [ 00:28:55.448 { 00:28:55.448 "job": "Nvme0n1", 00:28:55.448 "core_mask": "0x2", 00:28:55.448 "workload": "randwrite", 00:28:55.448 "status": "finished", 00:28:55.448 "queue_depth": 128, 00:28:55.448 "io_size": 4096, 00:28:55.448 "runtime": 10.006103, 00:28:55.448 "iops": 22966.983250122452, 00:28:55.448 "mibps": 89.71477832079083, 00:28:55.448 "io_failed": 0, 00:28:55.448 "io_timeout": 0, 00:28:55.448 "avg_latency_us": 5570.089399734753, 00:28:55.448 "min_latency_us": 3234.0591304347827, 00:28:55.448 "max_latency_us": 15728.64 00:28:55.448 } 00:28:55.448 ], 00:28:55.448 "core_count": 1 00:28:55.448 } 00:28:55.448 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2806594 00:28:55.448 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2806594 ']' 00:28:55.448 03:37:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2806594 00:28:55.448 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:28:55.448 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.448 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806594 00:28:55.706 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:55.706 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:55.706 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806594' 00:28:55.706 killing process with pid 2806594 00:28:55.706 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2806594 00:28:55.706 Received shutdown signal, test time was about 10.000000 seconds 00:28:55.706 00:28:55.706 Latency(us) 00:28:55.706 [2024-12-06T02:37:15.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.706 [2024-12-06T02:37:15.847Z] =================================================================================================================== 00:28:55.706 [2024-12-06T02:37:15.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:55.706 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2806594 00:28:55.707 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:55.965 03:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:56.225 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:56.225 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:56.485 [2024-12-06 03:37:16.560384] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:56.485 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:56.744 request: 00:28:56.744 { 00:28:56.744 "uuid": "f377bcf1-2bf5-445b-ae71-38c24f8c0c27", 00:28:56.744 "method": 
"bdev_lvol_get_lvstores", 00:28:56.744 "req_id": 1 00:28:56.744 } 00:28:56.744 Got JSON-RPC error response 00:28:56.744 response: 00:28:56.744 { 00:28:56.744 "code": -19, 00:28:56.744 "message": "No such device" 00:28:56.744 } 00:28:56.744 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:56.744 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:56.744 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:56.744 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:56.744 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:57.003 aio_bdev 00:28:57.003 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6bc50b59-8bcd-44d9-a4af-ea0846edfc66 00:28:57.003 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6bc50b59-8bcd-44d9-a4af-ea0846edfc66 00:28:57.003 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:57.003 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:57.003 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:57.004 03:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:57.004 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:57.263 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6bc50b59-8bcd-44d9-a4af-ea0846edfc66 -t 2000 00:28:57.263 [ 00:28:57.263 { 00:28:57.263 "name": "6bc50b59-8bcd-44d9-a4af-ea0846edfc66", 00:28:57.263 "aliases": [ 00:28:57.263 "lvs/lvol" 00:28:57.263 ], 00:28:57.263 "product_name": "Logical Volume", 00:28:57.263 "block_size": 4096, 00:28:57.263 "num_blocks": 38912, 00:28:57.263 "uuid": "6bc50b59-8bcd-44d9-a4af-ea0846edfc66", 00:28:57.263 "assigned_rate_limits": { 00:28:57.263 "rw_ios_per_sec": 0, 00:28:57.263 "rw_mbytes_per_sec": 0, 00:28:57.263 "r_mbytes_per_sec": 0, 00:28:57.263 "w_mbytes_per_sec": 0 00:28:57.263 }, 00:28:57.263 "claimed": false, 00:28:57.263 "zoned": false, 00:28:57.263 "supported_io_types": { 00:28:57.263 "read": true, 00:28:57.263 "write": true, 00:28:57.263 "unmap": true, 00:28:57.263 "flush": false, 00:28:57.263 "reset": true, 00:28:57.263 "nvme_admin": false, 00:28:57.263 "nvme_io": false, 00:28:57.263 "nvme_io_md": false, 00:28:57.263 "write_zeroes": true, 00:28:57.263 "zcopy": false, 00:28:57.263 "get_zone_info": false, 00:28:57.263 "zone_management": false, 00:28:57.263 "zone_append": false, 00:28:57.263 "compare": false, 00:28:57.263 "compare_and_write": false, 00:28:57.263 "abort": false, 00:28:57.263 "seek_hole": true, 00:28:57.263 "seek_data": true, 00:28:57.263 "copy": false, 00:28:57.263 "nvme_iov_md": false 00:28:57.263 }, 00:28:57.263 "driver_specific": { 00:28:57.263 "lvol": { 00:28:57.263 "lvol_store_uuid": "f377bcf1-2bf5-445b-ae71-38c24f8c0c27", 00:28:57.263 "base_bdev": "aio_bdev", 00:28:57.263 
"thin_provision": false, 00:28:57.263 "num_allocated_clusters": 38, 00:28:57.263 "snapshot": false, 00:28:57.263 "clone": false, 00:28:57.263 "esnap_clone": false 00:28:57.263 } 00:28:57.263 } 00:28:57.263 } 00:28:57.263 ] 00:28:57.263 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:57.263 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:57.263 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:57.523 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:57.523 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:57.523 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 00:28:57.782 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:57.782 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6bc50b59-8bcd-44d9-a4af-ea0846edfc66 00:28:58.042 03:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f377bcf1-2bf5-445b-ae71-38c24f8c0c27 
00:28:58.042 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:58.302 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:58.302 00:28:58.302 real 0m15.800s 00:28:58.302 user 0m15.306s 00:28:58.302 sys 0m1.505s 00:28:58.302 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.302 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.302 ************************************ 00:28:58.302 END TEST lvs_grow_clean 00:28:58.302 ************************************ 00:28:58.302 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:58.302 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:58.302 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.302 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:58.561 ************************************ 00:28:58.561 START TEST lvs_grow_dirty 00:28:58.561 ************************************ 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:58.561 03:37:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:58.561 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:58.820 03:37:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=52075c43-791e-4c9b-8b2c-7a69a334a16f 00:28:58.820 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:28:58.820 03:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:59.080 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:59.080 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:59.080 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 52075c43-791e-4c9b-8b2c-7a69a334a16f lvol 150 00:28:59.339 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f5c044fd-2d63-4bc4-8a99-6853bbc4b898 00:28:59.339 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:59.339 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:59.339 [2024-12-06 03:37:19.476348] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:59.339 [2024-12-06 
03:37:19.476483] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:59.598 true 00:28:59.598 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:59.598 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:28:59.598 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:59.598 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:59.857 03:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f5c044fd-2d63-4bc4-8a99-6853bbc4b898 00:29:00.117 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:00.376 [2024-12-06 03:37:20.268608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.376 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:00.376 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2809173 00:29:00.376 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:00.376 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:00.376 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2809173 /var/tmp/bdevperf.sock 00:29:00.377 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2809173 ']' 00:29:00.377 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.377 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.377 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:00.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:00.377 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.377 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:00.666 [2024-12-06 03:37:20.552418] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:29:00.666 [2024-12-06 03:37:20.552468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809173 ] 00:29:00.666 [2024-12-06 03:37:20.614506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.666 [2024-12-06 03:37:20.657989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.666 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.666 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:00.666 03:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:01.233 Nvme0n1 00:29:01.233 03:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:01.233 [ 00:29:01.233 { 00:29:01.233 "name": "Nvme0n1", 00:29:01.233 "aliases": [ 00:29:01.233 "f5c044fd-2d63-4bc4-8a99-6853bbc4b898" 00:29:01.233 ], 00:29:01.233 "product_name": "NVMe disk", 00:29:01.233 "block_size": 4096, 00:29:01.233 "num_blocks": 38912, 00:29:01.233 "uuid": "f5c044fd-2d63-4bc4-8a99-6853bbc4b898", 00:29:01.233 "numa_id": 1, 00:29:01.233 "assigned_rate_limits": { 00:29:01.233 "rw_ios_per_sec": 0, 00:29:01.233 "rw_mbytes_per_sec": 0, 00:29:01.233 "r_mbytes_per_sec": 0, 00:29:01.233 "w_mbytes_per_sec": 0 00:29:01.233 }, 00:29:01.233 "claimed": false, 00:29:01.233 "zoned": false, 
00:29:01.233 "supported_io_types": { 00:29:01.233 "read": true, 00:29:01.233 "write": true, 00:29:01.233 "unmap": true, 00:29:01.233 "flush": true, 00:29:01.233 "reset": true, 00:29:01.233 "nvme_admin": true, 00:29:01.233 "nvme_io": true, 00:29:01.233 "nvme_io_md": false, 00:29:01.233 "write_zeroes": true, 00:29:01.233 "zcopy": false, 00:29:01.233 "get_zone_info": false, 00:29:01.233 "zone_management": false, 00:29:01.233 "zone_append": false, 00:29:01.233 "compare": true, 00:29:01.233 "compare_and_write": true, 00:29:01.233 "abort": true, 00:29:01.233 "seek_hole": false, 00:29:01.233 "seek_data": false, 00:29:01.233 "copy": true, 00:29:01.233 "nvme_iov_md": false 00:29:01.233 }, 00:29:01.233 "memory_domains": [ 00:29:01.233 { 00:29:01.233 "dma_device_id": "system", 00:29:01.233 "dma_device_type": 1 00:29:01.233 } 00:29:01.233 ], 00:29:01.233 "driver_specific": { 00:29:01.233 "nvme": [ 00:29:01.233 { 00:29:01.233 "trid": { 00:29:01.233 "trtype": "TCP", 00:29:01.233 "adrfam": "IPv4", 00:29:01.233 "traddr": "10.0.0.2", 00:29:01.233 "trsvcid": "4420", 00:29:01.233 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:01.233 }, 00:29:01.233 "ctrlr_data": { 00:29:01.233 "cntlid": 1, 00:29:01.233 "vendor_id": "0x8086", 00:29:01.233 "model_number": "SPDK bdev Controller", 00:29:01.233 "serial_number": "SPDK0", 00:29:01.233 "firmware_revision": "25.01", 00:29:01.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.233 "oacs": { 00:29:01.233 "security": 0, 00:29:01.233 "format": 0, 00:29:01.233 "firmware": 0, 00:29:01.233 "ns_manage": 0 00:29:01.233 }, 00:29:01.233 "multi_ctrlr": true, 00:29:01.233 "ana_reporting": false 00:29:01.233 }, 00:29:01.233 "vs": { 00:29:01.233 "nvme_version": "1.3" 00:29:01.233 }, 00:29:01.233 "ns_data": { 00:29:01.233 "id": 1, 00:29:01.233 "can_share": true 00:29:01.233 } 00:29:01.233 } 00:29:01.233 ], 00:29:01.233 "mp_policy": "active_passive" 00:29:01.233 } 00:29:01.233 } 00:29:01.233 ] 00:29:01.233 03:37:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:01.233 03:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2809234 00:29:01.233 03:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:01.491 Running I/O for 10 seconds... 00:29:02.427 Latency(us) 00:29:02.427 [2024-12-06T02:37:22.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:02.427 Nvme0n1 : 1.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:02.427 [2024-12-06T02:37:22.568Z] =================================================================================================================== 00:29:02.427 [2024-12-06T02:37:22.568Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:02.427 00:29:03.361 03:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:03.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:03.361 Nvme0n1 : 2.00 22796.50 89.05 0.00 0.00 0.00 0.00 0.00 00:29:03.361 [2024-12-06T02:37:23.502Z] =================================================================================================================== 00:29:03.361 [2024-12-06T02:37:23.502Z] Total : 22796.50 89.05 0.00 0.00 0.00 0.00 0.00 00:29:03.361 00:29:03.619 true 00:29:03.619 03:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:03.619 03:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:03.619 03:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:03.619 03:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:03.619 03:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2809234 00:29:04.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:04.553 Nvme0n1 : 3.00 22817.67 89.13 0.00 0.00 0.00 0.00 0.00 00:29:04.553 [2024-12-06T02:37:24.694Z] =================================================================================================================== 00:29:04.553 [2024-12-06T02:37:24.694Z] Total : 22817.67 89.13 0.00 0.00 0.00 0.00 0.00 00:29:04.553 00:29:05.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:05.494 Nvme0n1 : 4.00 22891.75 89.42 0.00 0.00 0.00 0.00 0.00 00:29:05.494 [2024-12-06T02:37:25.635Z] =================================================================================================================== 00:29:05.494 [2024-12-06T02:37:25.635Z] Total : 22891.75 89.42 0.00 0.00 0.00 0.00 0.00 00:29:05.494 00:29:06.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:06.429 Nvme0n1 : 5.00 22822.00 89.15 0.00 0.00 0.00 0.00 0.00 00:29:06.429 [2024-12-06T02:37:26.570Z] =================================================================================================================== 00:29:06.429 [2024-12-06T02:37:26.570Z] Total : 22822.00 89.15 0.00 0.00 0.00 0.00 0.00 00:29:06.429 00:29:07.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:07.366 Nvme0n1 : 6.00 22870.67 89.34 0.00 0.00 0.00 0.00 0.00 00:29:07.366 [2024-12-06T02:37:27.507Z] =================================================================================================================== 00:29:07.366 [2024-12-06T02:37:27.507Z] Total : 22870.67 89.34 0.00 0.00 0.00 0.00 0.00 00:29:07.366 00:29:08.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:08.747 Nvme0n1 : 7.00 22905.43 89.47 0.00 0.00 0.00 0.00 0.00 00:29:08.747 [2024-12-06T02:37:28.888Z] =================================================================================================================== 00:29:08.747 [2024-12-06T02:37:28.888Z] Total : 22905.43 89.47 0.00 0.00 0.00 0.00 0.00 00:29:08.747 00:29:09.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:09.680 Nvme0n1 : 8.00 22931.50 89.58 0.00 0.00 0.00 0.00 0.00 00:29:09.680 [2024-12-06T02:37:29.821Z] =================================================================================================================== 00:29:09.680 [2024-12-06T02:37:29.821Z] Total : 22931.50 89.58 0.00 0.00 0.00 0.00 0.00 00:29:09.680 00:29:10.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.614 Nvme0n1 : 9.00 22923.56 89.55 0.00 0.00 0.00 0.00 0.00 00:29:10.614 [2024-12-06T02:37:30.755Z] =================================================================================================================== 00:29:10.614 [2024-12-06T02:37:30.755Z] Total : 22923.56 89.55 0.00 0.00 0.00 0.00 0.00 00:29:10.614 00:29:11.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.551 Nvme0n1 : 10.00 22942.60 89.62 0.00 0.00 0.00 0.00 0.00 00:29:11.551 [2024-12-06T02:37:31.692Z] =================================================================================================================== 00:29:11.551 [2024-12-06T02:37:31.692Z] Total : 22942.60 89.62 0.00 0.00 0.00 0.00 0.00 00:29:11.551 00:29:11.551 
00:29:11.551 Latency(us) 00:29:11.551 [2024-12-06T02:37:31.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.551 Nvme0n1 : 10.00 22947.21 89.64 0.00 0.00 5575.01 3547.49 15614.66 00:29:11.551 [2024-12-06T02:37:31.692Z] =================================================================================================================== 00:29:11.551 [2024-12-06T02:37:31.692Z] Total : 22947.21 89.64 0.00 0.00 5575.01 3547.49 15614.66 00:29:11.551 { 00:29:11.551 "results": [ 00:29:11.551 { 00:29:11.551 "job": "Nvme0n1", 00:29:11.551 "core_mask": "0x2", 00:29:11.551 "workload": "randwrite", 00:29:11.551 "status": "finished", 00:29:11.551 "queue_depth": 128, 00:29:11.551 "io_size": 4096, 00:29:11.551 "runtime": 10.003568, 00:29:11.551 "iops": 22947.212434603334, 00:29:11.551 "mibps": 89.63754857266927, 00:29:11.551 "io_failed": 0, 00:29:11.551 "io_timeout": 0, 00:29:11.551 "avg_latency_us": 5575.0074182867265, 00:29:11.551 "min_latency_us": 3547.4921739130436, 00:29:11.551 "max_latency_us": 15614.664347826087 00:29:11.551 } 00:29:11.551 ], 00:29:11.551 "core_count": 1 00:29:11.551 } 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2809173 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2809173 ']' 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2809173 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.551 03:37:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809173 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809173' 00:29:11.551 killing process with pid 2809173 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2809173 00:29:11.551 Received shutdown signal, test time was about 10.000000 seconds 00:29:11.551 00:29:11.551 Latency(us) 00:29:11.551 [2024-12-06T02:37:31.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.551 [2024-12-06T02:37:31.692Z] =================================================================================================================== 00:29:11.551 [2024-12-06T02:37:31.692Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.551 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2809173 00:29:11.811 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:11.811 03:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:12.069 03:37:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:12.069 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2806094 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2806094 00:29:12.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2806094 Killed "${NVMF_APP[@]}" "$@" 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2811027 00:29:12.329 03:37:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2811027 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2811027 ']' 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.329 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:12.329 [2024-12-06 03:37:32.400233] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:12.329 [2024-12-06 03:37:32.401169] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:29:12.329 [2024-12-06 03:37:32.401207] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.329 [2024-12-06 03:37:32.466811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.589 [2024-12-06 03:37:32.507789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.589 [2024-12-06 03:37:32.507825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.589 [2024-12-06 03:37:32.507832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.589 [2024-12-06 03:37:32.507838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.589 [2024-12-06 03:37:32.507844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.589 [2024-12-06 03:37:32.508379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.589 [2024-12-06 03:37:32.575831] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:12.589 [2024-12-06 03:37:32.576063] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:12.589 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.589 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:12.589 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.589 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.589 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:12.589 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.589 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:12.848 [2024-12-06 03:37:32.811735] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:12.848 [2024-12-06 03:37:32.811845] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:12.848 [2024-12-06 03:37:32.811883] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:12.848 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:12.848 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f5c044fd-2d63-4bc4-8a99-6853bbc4b898 00:29:12.848 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=f5c044fd-2d63-4bc4-8a99-6853bbc4b898 00:29:12.848 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:12.848 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:12.848 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:12.848 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:12.848 03:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:13.107 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f5c044fd-2d63-4bc4-8a99-6853bbc4b898 -t 2000 00:29:13.107 [ 00:29:13.107 { 00:29:13.107 "name": "f5c044fd-2d63-4bc4-8a99-6853bbc4b898", 00:29:13.107 "aliases": [ 00:29:13.107 "lvs/lvol" 00:29:13.107 ], 00:29:13.107 "product_name": "Logical Volume", 00:29:13.107 "block_size": 4096, 00:29:13.107 "num_blocks": 38912, 00:29:13.107 "uuid": "f5c044fd-2d63-4bc4-8a99-6853bbc4b898", 00:29:13.107 "assigned_rate_limits": { 00:29:13.107 "rw_ios_per_sec": 0, 00:29:13.107 "rw_mbytes_per_sec": 0, 00:29:13.107 "r_mbytes_per_sec": 0, 00:29:13.107 "w_mbytes_per_sec": 0 00:29:13.107 }, 00:29:13.107 "claimed": false, 00:29:13.107 "zoned": false, 00:29:13.107 "supported_io_types": { 00:29:13.107 "read": true, 00:29:13.107 "write": true, 00:29:13.107 "unmap": true, 00:29:13.107 "flush": false, 00:29:13.107 "reset": true, 00:29:13.107 "nvme_admin": false, 00:29:13.107 "nvme_io": false, 00:29:13.107 "nvme_io_md": false, 00:29:13.108 "write_zeroes": true, 
00:29:13.108 "zcopy": false, 00:29:13.108 "get_zone_info": false, 00:29:13.108 "zone_management": false, 00:29:13.108 "zone_append": false, 00:29:13.108 "compare": false, 00:29:13.108 "compare_and_write": false, 00:29:13.108 "abort": false, 00:29:13.108 "seek_hole": true, 00:29:13.108 "seek_data": true, 00:29:13.108 "copy": false, 00:29:13.108 "nvme_iov_md": false 00:29:13.108 }, 00:29:13.108 "driver_specific": { 00:29:13.108 "lvol": { 00:29:13.108 "lvol_store_uuid": "52075c43-791e-4c9b-8b2c-7a69a334a16f", 00:29:13.108 "base_bdev": "aio_bdev", 00:29:13.108 "thin_provision": false, 00:29:13.108 "num_allocated_clusters": 38, 00:29:13.108 "snapshot": false, 00:29:13.108 "clone": false, 00:29:13.108 "esnap_clone": false 00:29:13.108 } 00:29:13.108 } 00:29:13.108 } 00:29:13.108 ] 00:29:13.108 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:13.108 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:13.108 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:13.374 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:13.374 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:13.374 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:13.633 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:13.633 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:13.892 [2024-12-06 03:37:33.812810] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:13.892 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:13.892 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:13.892 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:13.892 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:13.892 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.892 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:13.892 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.892 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:13.892 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.893 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:13.893 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:13.893 03:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:14.152 request: 00:29:14.152 { 00:29:14.153 "uuid": "52075c43-791e-4c9b-8b2c-7a69a334a16f", 00:29:14.153 "method": "bdev_lvol_get_lvstores", 00:29:14.153 "req_id": 1 00:29:14.153 } 00:29:14.153 Got JSON-RPC error response 00:29:14.153 response: 00:29:14.153 { 00:29:14.153 "code": -19, 00:29:14.153 "message": "No such device" 00:29:14.153 } 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:14.153 aio_bdev 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f5c044fd-2d63-4bc4-8a99-6853bbc4b898 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f5c044fd-2d63-4bc4-8a99-6853bbc4b898 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:14.153 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:14.412 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f5c044fd-2d63-4bc4-8a99-6853bbc4b898 -t 2000 00:29:14.671 [ 00:29:14.671 { 00:29:14.671 "name": "f5c044fd-2d63-4bc4-8a99-6853bbc4b898", 00:29:14.671 "aliases": [ 00:29:14.671 "lvs/lvol" 00:29:14.671 ], 00:29:14.671 "product_name": "Logical Volume", 00:29:14.671 "block_size": 4096, 00:29:14.671 "num_blocks": 38912, 00:29:14.671 "uuid": "f5c044fd-2d63-4bc4-8a99-6853bbc4b898", 00:29:14.671 "assigned_rate_limits": { 00:29:14.671 "rw_ios_per_sec": 0, 00:29:14.671 "rw_mbytes_per_sec": 0, 00:29:14.671 
"r_mbytes_per_sec": 0, 00:29:14.671 "w_mbytes_per_sec": 0 00:29:14.671 }, 00:29:14.671 "claimed": false, 00:29:14.671 "zoned": false, 00:29:14.671 "supported_io_types": { 00:29:14.671 "read": true, 00:29:14.671 "write": true, 00:29:14.671 "unmap": true, 00:29:14.671 "flush": false, 00:29:14.671 "reset": true, 00:29:14.671 "nvme_admin": false, 00:29:14.671 "nvme_io": false, 00:29:14.671 "nvme_io_md": false, 00:29:14.671 "write_zeroes": true, 00:29:14.671 "zcopy": false, 00:29:14.671 "get_zone_info": false, 00:29:14.671 "zone_management": false, 00:29:14.671 "zone_append": false, 00:29:14.671 "compare": false, 00:29:14.671 "compare_and_write": false, 00:29:14.671 "abort": false, 00:29:14.671 "seek_hole": true, 00:29:14.671 "seek_data": true, 00:29:14.671 "copy": false, 00:29:14.671 "nvme_iov_md": false 00:29:14.671 }, 00:29:14.671 "driver_specific": { 00:29:14.671 "lvol": { 00:29:14.671 "lvol_store_uuid": "52075c43-791e-4c9b-8b2c-7a69a334a16f", 00:29:14.671 "base_bdev": "aio_bdev", 00:29:14.671 "thin_provision": false, 00:29:14.671 "num_allocated_clusters": 38, 00:29:14.671 "snapshot": false, 00:29:14.671 "clone": false, 00:29:14.671 "esnap_clone": false 00:29:14.671 } 00:29:14.671 } 00:29:14.671 } 00:29:14.671 ] 00:29:14.671 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:14.671 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:14.671 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:14.930 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:14.930 03:37:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:14.930 03:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:15.189 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:15.189 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f5c044fd-2d63-4bc4-8a99-6853bbc4b898 00:29:15.189 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52075c43-791e-4c9b-8b2c-7a69a334a16f 00:29:15.448 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:15.707 00:29:15.707 real 0m17.228s 00:29:15.707 user 0m34.641s 00:29:15.707 sys 0m3.769s 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:15.707 ************************************ 00:29:15.707 END TEST lvs_grow_dirty 00:29:15.707 ************************************ 
00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:15.707 nvmf_trace.0 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.707 03:37:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.707 rmmod nvme_tcp 00:29:15.707 rmmod nvme_fabrics 00:29:15.707 rmmod nvme_keyring 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2811027 ']' 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2811027 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2811027 ']' 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2811027 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.707 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2811027 00:29:15.967 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.967 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.967 
03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2811027' 00:29:15.967 killing process with pid 2811027 00:29:15.967 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2811027 00:29:15.967 03:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2811027 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.967 03:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.506 
03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.506 00:29:18.506 real 0m41.727s 00:29:18.506 user 0m52.337s 00:29:18.506 sys 0m9.789s 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:18.506 ************************************ 00:29:18.506 END TEST nvmf_lvs_grow 00:29:18.506 ************************************ 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:18.506 ************************************ 00:29:18.506 START TEST nvmf_bdev_io_wait 00:29:18.506 ************************************ 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:18.506 * Looking for test storage... 
00:29:18.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.506 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:18.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.506 --rc genhtml_branch_coverage=1 00:29:18.506 --rc genhtml_function_coverage=1 00:29:18.506 --rc genhtml_legend=1 00:29:18.506 --rc geninfo_all_blocks=1 00:29:18.506 --rc geninfo_unexecuted_blocks=1 00:29:18.506 00:29:18.506 ' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:18.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.507 --rc genhtml_branch_coverage=1 00:29:18.507 --rc genhtml_function_coverage=1 00:29:18.507 --rc genhtml_legend=1 00:29:18.507 --rc geninfo_all_blocks=1 00:29:18.507 --rc geninfo_unexecuted_blocks=1 00:29:18.507 00:29:18.507 ' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:18.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.507 --rc genhtml_branch_coverage=1 00:29:18.507 --rc genhtml_function_coverage=1 00:29:18.507 --rc genhtml_legend=1 00:29:18.507 --rc geninfo_all_blocks=1 00:29:18.507 --rc geninfo_unexecuted_blocks=1 00:29:18.507 00:29:18.507 ' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:18.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.507 --rc genhtml_branch_coverage=1 00:29:18.507 --rc genhtml_function_coverage=1 
00:29:18.507 --rc genhtml_legend=1 00:29:18.507 --rc geninfo_all_blocks=1 00:29:18.507 --rc geninfo_unexecuted_blocks=1 00:29:18.507 00:29:18.507 ' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:18.507 03:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.507 03:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.507 03:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.507 03:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.507 03:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:23.955 03:37:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:23.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:23.955 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.955 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:23.956 Found net devices under 0000:86:00.0: cvl_0_0 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:23.956 Found net devices under 0000:86:00.1: cvl_0_1 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:23.956 03:37:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.956 03:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:23.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:29:23.956 00:29:23.956 --- 10.0.0.2 ping statistics --- 00:29:23.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.956 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:29:23.956 00:29:23.956 --- 10.0.0.1 ping statistics --- 00:29:23.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.956 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:23.956 03:37:44 
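The `nvmf_tcp_init` phase above moves one physical port into a dedicated network namespace so target and initiator traffic cross a real link. A dry-run sketch of those steps follows; interface names (`cvl_0_0`/`cvl_0_1`), addresses, and the port-4420 iptables rule come from this log, while the `run()` wrapper is an illustrative assumption so the sequence can be shown without root privileges or the physical E810 ports.

```shell
# Dry-run sketch of the namespace plumbing performed by nvmf_tcp_init above.
# With DRY_RUN=1 each step is echoed instead of executed.
run() { [ "${DRY_RUN:-0}" = 1 ] && echo "$*" || "$@"; }

nvmf_tcp_init_sketch() {
  run ip netns add cvl_0_0_ns_spdk                          # target gets its own netns
  run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
  run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  run ip netns exec cvl_0_0_ns_spdk ip link set lo up
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}

DRY_RUN=1 nvmf_tcp_init_sketch
```

The two pings in the log then confirm reachability in both directions before the target is started inside the namespace.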
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.956 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2815141 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2815141 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2815141 ']' 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.216 [2024-12-06 03:37:44.141815] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:24.216 [2024-12-06 03:37:44.142713] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:29:24.216 [2024-12-06 03:37:44.142748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.216 [2024-12-06 03:37:44.209606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.216 [2024-12-06 03:37:44.252110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.216 [2024-12-06 03:37:44.252152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.216 [2024-12-06 03:37:44.252159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.216 [2024-12-06 03:37:44.252165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.216 [2024-12-06 03:37:44.252170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:24.216 [2024-12-06 03:37:44.253764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.216 [2024-12-06 03:37:44.253860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.216 [2024-12-06 03:37:44.253972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.216 [2024-12-06 03:37:44.253974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.216 [2024-12-06 03:37:44.254272] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.216 03:37:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.216 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.476 [2024-12-06 03:37:44.390472] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:24.476 [2024-12-06 03:37:44.390583] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:24.476 [2024-12-06 03:37:44.391206] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:24.476 [2024-12-06 03:37:44.391648] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:24.476 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.476 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.476 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.476 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.476 [2024-12-06 03:37:44.398649] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.476 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.476 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.476 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.476 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.476 Malloc0 00:29:24.476 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.477 03:37:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.477 [2024-12-06 03:37:44.450637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2815319 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2815321 00:29:24.477 03:37:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:24.477 { 00:29:24.477 "params": { 00:29:24.477 "name": "Nvme$subsystem", 00:29:24.477 "trtype": "$TEST_TRANSPORT", 00:29:24.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.477 "adrfam": "ipv4", 00:29:24.477 "trsvcid": "$NVMF_PORT", 00:29:24.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.477 "hdgst": ${hdgst:-false}, 00:29:24.477 "ddgst": ${ddgst:-false} 00:29:24.477 }, 00:29:24.477 "method": "bdev_nvme_attach_controller" 00:29:24.477 } 00:29:24.477 EOF 00:29:24.477 )") 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2815323 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:24.477 03:37:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:24.477 { 00:29:24.477 "params": { 00:29:24.477 "name": "Nvme$subsystem", 00:29:24.477 "trtype": "$TEST_TRANSPORT", 00:29:24.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.477 "adrfam": "ipv4", 00:29:24.477 "trsvcid": "$NVMF_PORT", 00:29:24.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.477 "hdgst": ${hdgst:-false}, 00:29:24.477 "ddgst": ${ddgst:-false} 00:29:24.477 }, 00:29:24.477 "method": "bdev_nvme_attach_controller" 00:29:24.477 } 00:29:24.477 EOF 00:29:24.477 )") 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2815326 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:24.477 { 00:29:24.477 "params": { 00:29:24.477 "name": "Nvme$subsystem", 00:29:24.477 "trtype": "$TEST_TRANSPORT", 00:29:24.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.477 "adrfam": "ipv4", 00:29:24.477 "trsvcid": "$NVMF_PORT", 00:29:24.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.477 "hdgst": ${hdgst:-false}, 00:29:24.477 "ddgst": ${ddgst:-false} 00:29:24.477 }, 00:29:24.477 "method": "bdev_nvme_attach_controller" 00:29:24.477 } 00:29:24.477 EOF 00:29:24.477 )") 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:24.477 { 00:29:24.477 "params": { 00:29:24.477 "name": "Nvme$subsystem", 00:29:24.477 "trtype": "$TEST_TRANSPORT", 00:29:24.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.477 "adrfam": "ipv4", 00:29:24.477 "trsvcid": "$NVMF_PORT", 00:29:24.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.477 "hdgst": ${hdgst:-false}, 00:29:24.477 "ddgst": ${ddgst:-false} 00:29:24.477 }, 00:29:24.477 "method": 
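The test fans out four concurrent bdevperf instances, one workload per dedicated core (masks 0x10, 0x20, 0x40, 0x80 for write, read, flush, unmap). A sketch of that launch matrix, echoing instead of executing; the binary path is taken from this log, and generating the masks in a loop is an illustrative assumption (the script above issues the four commands individually).

```shell
# Sketch of the four bdevperf launches traced above: each instance gets
# its own core mask (-m), instance id (-i), and workload (-w), with
# qd 128, 4 KiB I/O, 1 s runtime, and 256 MiB of hugepage memory.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
i=1
for w in write read flush unmap; do
  mask=$(printf '0x%x' $((0x10 << (i - 1))))   # 0x10, 0x20, 0x40, 0x80
  echo "$BDEVPERF -m $mask -i $i --json /dev/fd/63 -q 128 -o 4096 -w $w -t 1 -s 256"
  i=$((i + 1))
done
```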
"bdev_nvme_attach_controller" 00:29:24.477 } 00:29:24.477 EOF 00:29:24.477 )") 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2815319 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:24.477 "params": { 00:29:24.477 "name": "Nvme1", 00:29:24.477 "trtype": "tcp", 00:29:24.477 "traddr": "10.0.0.2", 00:29:24.477 "adrfam": "ipv4", 00:29:24.477 "trsvcid": "4420", 00:29:24.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.477 "hdgst": false, 00:29:24.477 "ddgst": false 00:29:24.477 }, 00:29:24.477 "method": "bdev_nvme_attach_controller" 00:29:24.477 }' 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:24.477 "params": { 00:29:24.477 "name": "Nvme1", 00:29:24.477 "trtype": "tcp", 00:29:24.477 "traddr": "10.0.0.2", 00:29:24.477 "adrfam": "ipv4", 00:29:24.477 "trsvcid": "4420", 00:29:24.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.477 "hdgst": false, 00:29:24.477 "ddgst": false 00:29:24.477 }, 00:29:24.477 "method": "bdev_nvme_attach_controller" 00:29:24.477 }' 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:24.477 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:24.477 "params": { 00:29:24.477 "name": "Nvme1", 00:29:24.477 "trtype": "tcp", 00:29:24.477 "traddr": "10.0.0.2", 00:29:24.477 "adrfam": "ipv4", 00:29:24.477 "trsvcid": "4420", 00:29:24.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.477 "hdgst": false, 00:29:24.477 "ddgst": false 00:29:24.477 }, 00:29:24.478 "method": "bdev_nvme_attach_controller" 00:29:24.478 }' 00:29:24.478 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:24.478 03:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:24.478 "params": { 00:29:24.478 "name": "Nvme1", 00:29:24.478 "trtype": "tcp", 00:29:24.478 "traddr": "10.0.0.2", 00:29:24.478 "adrfam": "ipv4", 00:29:24.478 "trsvcid": "4420", 00:29:24.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.478 "hdgst": false, 00:29:24.478 "ddgst": false 00:29:24.478 }, 00:29:24.478 "method": "bdev_nvme_attach_controller" 
00:29:24.478 }' 00:29:24.478 [2024-12-06 03:37:44.501251] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:29:24.478 [2024-12-06 03:37:44.501266] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:29:24.478 [2024-12-06 03:37:44.501301] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:24.478 [2024-12-06 03:37:44.501307] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:24.478 [2024-12-06 03:37:44.504054] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:29:24.478 [2024-12-06 03:37:44.504098] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:24.478 [2024-12-06 03:37:44.505049] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:29:24.478 [2024-12-06 03:37:44.505091] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:24.737 [2024-12-06 03:37:44.689077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.737 [2024-12-06 03:37:44.731982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:24.737 [2024-12-06 03:37:44.789054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.737 [2024-12-06 03:37:44.832189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:24.995 [2024-12-06 03:37:44.881520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.995 [2024-12-06 03:37:44.941684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:24.995 [2024-12-06 03:37:44.942192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.995 [2024-12-06 03:37:44.985106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:24.995 Running I/O for 1 seconds... 00:29:25.271 Running I/O for 1 seconds... 00:29:25.271 Running I/O for 1 seconds... 00:29:25.271 Running I/O for 1 seconds... 
00:29:26.204 8478.00 IOPS, 33.12 MiB/s 00:29:26.204 Latency(us) 00:29:26.204 [2024-12-06T02:37:46.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.204 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:26.204 Nvme1n1 : 1.02 8468.88 33.08 0.00 0.00 14989.63 3604.48 22453.20 00:29:26.204 [2024-12-06T02:37:46.345Z] =================================================================================================================== 00:29:26.204 [2024-12-06T02:37:46.345Z] Total : 8468.88 33.08 0.00 0.00 14989.63 3604.48 22453.20 00:29:26.204 236984.00 IOPS, 925.72 MiB/s 00:29:26.204 Latency(us) 00:29:26.204 [2024-12-06T02:37:46.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.204 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:26.204 Nvme1n1 : 1.00 236617.20 924.29 0.00 0.00 538.35 231.51 1538.67 00:29:26.204 [2024-12-06T02:37:46.345Z] =================================================================================================================== 00:29:26.204 [2024-12-06T02:37:46.345Z] Total : 236617.20 924.29 0.00 0.00 538.35 231.51 1538.67 00:29:26.204 7734.00 IOPS, 30.21 MiB/s 00:29:26.204 Latency(us) 00:29:26.204 [2024-12-06T02:37:46.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.204 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:26.204 Nvme1n1 : 1.01 7843.05 30.64 0.00 0.00 16276.29 4416.56 24732.72 00:29:26.204 [2024-12-06T02:37:46.345Z] =================================================================================================================== 00:29:26.204 [2024-12-06T02:37:46.345Z] Total : 7843.05 30.64 0.00 0.00 16276.29 4416.56 24732.72 00:29:26.204 12880.00 IOPS, 50.31 MiB/s 00:29:26.204 Latency(us) 00:29:26.204 [2024-12-06T02:37:46.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.204 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:26.204 Nvme1n1 : 1.00 12968.38 50.66 0.00 0.00 9848.93 2564.45 14246.96 00:29:26.204 [2024-12-06T02:37:46.345Z] =================================================================================================================== 00:29:26.204 [2024-12-06T02:37:46.345Z] Total : 12968.38 50.66 0.00 0.00 9848.93 2564.45 14246.96 00:29:26.204 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2815321 00:29:26.204 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2815323 00:29:26.204 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2815326 00:29:26.204 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:26.204 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.204 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.463 03:37:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.463 rmmod nvme_tcp 00:29:26.463 rmmod nvme_fabrics 00:29:26.463 rmmod nvme_keyring 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2815141 ']' 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2815141 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2815141 ']' 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2815141 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815141 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815141' 00:29:26.463 killing process with pid 2815141 00:29:26.463 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2815141 00:29:26.464 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2815141 00:29:26.464 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.464 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.464 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.464 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:26.464 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:26.464 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.464 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.723 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.723 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.723 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.723 03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.723 
03:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.624 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.624 00:29:28.624 real 0m10.484s 00:29:28.624 user 0m14.977s 00:29:28.624 sys 0m6.254s 00:29:28.624 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.624 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:28.624 ************************************ 00:29:28.624 END TEST nvmf_bdev_io_wait 00:29:28.624 ************************************ 00:29:28.624 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:28.624 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:28.624 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.624 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:28.624 ************************************ 00:29:28.624 START TEST nvmf_queue_depth 00:29:28.624 ************************************ 00:29:28.624 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:28.883 * Looking for test storage... 
00:29:28.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:28.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.883 --rc genhtml_branch_coverage=1 00:29:28.883 --rc genhtml_function_coverage=1 00:29:28.883 --rc genhtml_legend=1 00:29:28.883 --rc geninfo_all_blocks=1 00:29:28.883 --rc geninfo_unexecuted_blocks=1 00:29:28.883 00:29:28.883 ' 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:28.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.883 --rc genhtml_branch_coverage=1 00:29:28.883 --rc genhtml_function_coverage=1 00:29:28.883 --rc genhtml_legend=1 00:29:28.883 --rc geninfo_all_blocks=1 00:29:28.883 --rc geninfo_unexecuted_blocks=1 00:29:28.883 00:29:28.883 ' 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:28.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.883 --rc genhtml_branch_coverage=1 00:29:28.883 --rc genhtml_function_coverage=1 00:29:28.883 --rc genhtml_legend=1 00:29:28.883 --rc geninfo_all_blocks=1 00:29:28.883 --rc geninfo_unexecuted_blocks=1 00:29:28.883 00:29:28.883 ' 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:28.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.883 --rc genhtml_branch_coverage=1 00:29:28.883 --rc genhtml_function_coverage=1 00:29:28.883 --rc genhtml_legend=1 00:29:28.883 --rc 
geninfo_all_blocks=1 00:29:28.883 --rc geninfo_unexecuted_blocks=1 00:29:28.883 00:29:28.883 ' 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.883 03:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:28.883 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.884 03:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.884 03:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.884 03:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.158 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.158 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.158 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.158 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.158 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.158 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.158 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.158 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.159 
03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:34.159 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.159 03:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:34.159 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:34.159 Found net devices under 0000:86:00.0: cvl_0_0 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:34.159 Found net devices under 0000:86:00.1: cvl_0_1 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.159 03:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.159 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:34.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:29:34.418 00:29:34.418 --- 10.0.0.2 ping statistics --- 00:29:34.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.418 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:34.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:29:34.418 00:29:34.418 --- 10.0.0.1 ping statistics --- 00:29:34.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.418 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:34.418 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.419 03:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.419 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2819092 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2819092 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2819092 ']' 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.678 [2024-12-06 03:37:54.607642] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:34.678 [2024-12-06 03:37:54.608594] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:29:34.678 [2024-12-06 03:37:54.608629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.678 [2024-12-06 03:37:54.676844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.678 [2024-12-06 03:37:54.719146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.678 [2024-12-06 03:37:54.719182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.678 [2024-12-06 03:37:54.719191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.678 [2024-12-06 03:37:54.719198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.678 [2024-12-06 03:37:54.719204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.678 [2024-12-06 03:37:54.719739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.678 [2024-12-06 03:37:54.788723] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:34.678 [2024-12-06 03:37:54.788935] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.678 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.938 [2024-12-06 03:37:54.852167] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.938 Malloc0 00:29:34.938 03:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.938 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.939 [2024-12-06 03:37:54.908299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.939 
03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2819121 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2819121 /var/tmp/bdevperf.sock 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2819121 ']' 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:34.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.939 03:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:34.939 [2024-12-06 03:37:54.959465] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:29:34.939 [2024-12-06 03:37:54.959508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819121 ] 00:29:34.939 [2024-12-06 03:37:55.021244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.939 [2024-12-06 03:37:55.065140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.197 03:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.197 03:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:35.197 03:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:35.197 03:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.197 03:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:35.197 NVMe0n1 00:29:35.197 03:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.197 03:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:35.197 Running I/O for 10 seconds... 
00:29:37.513 11264.00 IOPS, 44.00 MiB/s [2024-12-06T02:37:58.591Z] 11771.00 IOPS, 45.98 MiB/s [2024-12-06T02:37:59.541Z] 11915.00 IOPS, 46.54 MiB/s [2024-12-06T02:38:00.479Z] 11988.75 IOPS, 46.83 MiB/s [2024-12-06T02:38:01.416Z] 12048.60 IOPS, 47.06 MiB/s [2024-12-06T02:38:02.354Z] 12053.67 IOPS, 47.08 MiB/s [2024-12-06T02:38:03.733Z] 12070.57 IOPS, 47.15 MiB/s [2024-12-06T02:38:04.671Z] 12093.25 IOPS, 47.24 MiB/s [2024-12-06T02:38:05.610Z] 12081.67 IOPS, 47.19 MiB/s [2024-12-06T02:38:05.610Z] 12104.20 IOPS, 47.28 MiB/s 00:29:45.469 Latency(us) 00:29:45.469 [2024-12-06T02:38:05.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.469 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:45.469 Verification LBA range: start 0x0 length 0x4000 00:29:45.469 NVMe0n1 : 10.05 12138.53 47.42 0.00 0.00 84065.91 12366.36 54480.36 00:29:45.469 [2024-12-06T02:38:05.611Z] =================================================================================================================== 00:29:45.470 [2024-12-06T02:38:05.611Z] Total : 12138.53 47.42 0.00 0.00 84065.91 12366.36 54480.36 00:29:45.470 { 00:29:45.470 "results": [ 00:29:45.470 { 00:29:45.470 "job": "NVMe0n1", 00:29:45.470 "core_mask": "0x1", 00:29:45.470 "workload": "verify", 00:29:45.470 "status": "finished", 00:29:45.470 "verify_range": { 00:29:45.470 "start": 0, 00:29:45.470 "length": 16384 00:29:45.470 }, 00:29:45.470 "queue_depth": 1024, 00:29:45.470 "io_size": 4096, 00:29:45.470 "runtime": 10.052039, 00:29:45.470 "iops": 12138.532291806667, 00:29:45.470 "mibps": 47.41614176486979, 00:29:45.470 "io_failed": 0, 00:29:45.470 "io_timeout": 0, 00:29:45.470 "avg_latency_us": 84065.91295801618, 00:29:45.470 "min_latency_us": 12366.358260869565, 00:29:45.470 "max_latency_us": 54480.361739130436 00:29:45.470 } 00:29:45.470 ], 00:29:45.470 "core_count": 1 00:29:45.470 } 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2819121 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2819121 ']' 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2819121 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819121 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819121' 00:29:45.470 killing process with pid 2819121 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2819121 00:29:45.470 Received shutdown signal, test time was about 10.000000 seconds 00:29:45.470 00:29:45.470 Latency(us) 00:29:45.470 [2024-12-06T02:38:05.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.470 [2024-12-06T02:38:05.611Z] =================================================================================================================== 00:29:45.470 [2024-12-06T02:38:05.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:45.470 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2819121 00:29:45.728 03:38:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.728 rmmod nvme_tcp 00:29:45.728 rmmod nvme_fabrics 00:29:45.728 rmmod nvme_keyring 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2819092 ']' 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2819092 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2819092 ']' 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2819092 00:29:45.728 03:38:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819092 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819092' 00:29:45.728 killing process with pid 2819092 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2819092 00:29:45.728 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2819092 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.987 03:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.894 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:48.154 00:29:48.154 real 0m19.292s 00:29:48.154 user 0m22.547s 00:29:48.154 sys 0m5.969s 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:48.154 ************************************ 00:29:48.154 END TEST nvmf_queue_depth 00:29:48.154 ************************************ 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:48.154 ************************************ 00:29:48.154 START 
TEST nvmf_target_multipath 00:29:48.154 ************************************ 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:48.154 * Looking for test storage... 00:29:48.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.154 03:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:48.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.154 --rc genhtml_branch_coverage=1 00:29:48.154 --rc genhtml_function_coverage=1 00:29:48.154 --rc genhtml_legend=1 00:29:48.154 --rc geninfo_all_blocks=1 00:29:48.154 --rc geninfo_unexecuted_blocks=1 00:29:48.154 00:29:48.154 ' 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:48.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.154 --rc genhtml_branch_coverage=1 00:29:48.154 --rc genhtml_function_coverage=1 00:29:48.154 --rc genhtml_legend=1 00:29:48.154 --rc geninfo_all_blocks=1 00:29:48.154 --rc geninfo_unexecuted_blocks=1 00:29:48.154 00:29:48.154 ' 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:48.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.154 --rc genhtml_branch_coverage=1 00:29:48.154 --rc genhtml_function_coverage=1 00:29:48.154 --rc genhtml_legend=1 00:29:48.154 --rc geninfo_all_blocks=1 00:29:48.154 --rc geninfo_unexecuted_blocks=1 00:29:48.154 00:29:48.154 ' 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:48.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.154 --rc genhtml_branch_coverage=1 00:29:48.154 --rc genhtml_function_coverage=1 00:29:48.154 --rc genhtml_legend=1 00:29:48.154 --rc geninfo_all_blocks=1 00:29:48.154 --rc geninfo_unexecuted_blocks=1 00:29:48.154 00:29:48.154 ' 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.154 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.155 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.155 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.155 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.155 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.414 03:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:48.414 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.415 03:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.415 03:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.683 03:38:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.683 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:53.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:53.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:53.684 Found net devices under 0000:86:00.0: cvl_0_0 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.684 03:38:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:53.684 Found net devices under 0000:86:00.1: cvl_0_1 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.684 03:38:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.684 03:38:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.684 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.942 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:29:53.943 00:29:53.943 --- 10.0.0.2 ping statistics --- 00:29:53.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.943 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:29:53.943 00:29:53.943 --- 10.0.0.1 ping statistics --- 00:29:53.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.943 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:53.943 only one NIC for nvmf test 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:53.943 03:38:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:53.943 rmmod nvme_tcp 00:29:53.943 rmmod nvme_fabrics 00:29:53.943 rmmod nvme_keyring 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:53.943 03:38:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.943 03:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.847 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.847 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.106 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.107 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:56.107 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:56.107 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.107 03:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.107 
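The `iptr` cleanup traced above (nvmf/common.sh@791) relies on a tagging pattern: every rule the test inserts carries an `SPDK_NVMF` comment, so teardown can drop them all at once by piping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`. A sketch of the filtering step against a sample ruleset (the ruleset text is illustrative, not captured from a live host):

```shell
#!/usr/bin/env bash
# Tag-and-sweep firewall cleanup: rules carrying the SPDK_NVMF comment
# are removed wholesale by filtering them out of an iptables-save dump.
set -euo pipefail

saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'

# Equivalent of: iptables-save | grep -v SPDK_NVMF | iptables-restore
cleaned="$(grep -v SPDK_NVMF <<<"$saved_rules")"
echo "$cleaned"
```

The comment is attached at insert time with `-m comment --comment 'SPDK_NVMF:...'`, as seen in the `ipts` wrapper earlier in the log, which is what makes this unconditional sweep safe for unrelated rules.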
03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:56.107 00:29:56.107 real 0m7.913s 00:29:56.107 user 0m1.690s 00:29:56.107 sys 0m4.225s 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:56.107 ************************************ 00:29:56.107 END TEST nvmf_target_multipath 00:29:56.107 ************************************ 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:56.107 ************************************ 00:29:56.107 START TEST nvmf_zcopy 00:29:56.107 ************************************ 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:56.107 * Looking for test storage... 
00:29:56.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:56.107 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:56.366 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:56.366 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.366 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:56.366 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:56.366 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:56.366 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:56.366 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:56.366 03:38:16 
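The `lt 1.15 2` trace above walks through cmp_versions in scripts/common.sh: both version strings are split on `.`, `-` and `:` via `IFS=.-:` and `read -ra`, then compared field by field as integers. A simplified, self-contained sketch of that comparison (handling only the `<` case; field names are mine, and purely numeric fields are assumed):

```shell
#!/usr/bin/env bash
# Field-wise numeric version comparison, modeled on the cmp_versions
# trace above: split on '.', '-' and ':', then compare pairwise.
set -u

version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    read -ra ver2 <<<"$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}
```

Comparing fields numerically rather than as strings is what makes `1.2 < 1.15` come out false here, unlike a plain lexicographic `sort`.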
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.366 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:56.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.366 --rc genhtml_branch_coverage=1 00:29:56.366 --rc genhtml_function_coverage=1 00:29:56.366 --rc genhtml_legend=1 00:29:56.366 --rc geninfo_all_blocks=1 00:29:56.366 --rc geninfo_unexecuted_blocks=1 00:29:56.366 00:29:56.366 ' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:56.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.367 --rc genhtml_branch_coverage=1 00:29:56.367 --rc genhtml_function_coverage=1 00:29:56.367 --rc genhtml_legend=1 00:29:56.367 --rc geninfo_all_blocks=1 00:29:56.367 --rc geninfo_unexecuted_blocks=1 00:29:56.367 00:29:56.367 ' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:56.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.367 --rc genhtml_branch_coverage=1 00:29:56.367 --rc genhtml_function_coverage=1 00:29:56.367 --rc genhtml_legend=1 00:29:56.367 --rc geninfo_all_blocks=1 00:29:56.367 --rc geninfo_unexecuted_blocks=1 00:29:56.367 00:29:56.367 ' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:56.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.367 --rc genhtml_branch_coverage=1 00:29:56.367 --rc genhtml_function_coverage=1 00:29:56.367 --rc genhtml_legend=1 00:29:56.367 --rc geninfo_all_blocks=1 00:29:56.367 --rc geninfo_unexecuted_blocks=1 00:29:56.367 00:29:56.367 ' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.367 03:38:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.367 03:38:16 
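The build_nvmf_app_args trace above shows how the test harness assembles the target's command line: options are appended to a bash array (`NVMF_APP+=(...)`) so that conditional flags like `--interrupt-mode` are either present as real arguments or absent entirely, never left behind as empty strings. A sketch of the pattern with illustrative placeholder values (the binary name and flag values here are assumptions, not read from a real environment):

```shell
#!/usr/bin/env bash
# Conditional argv construction via bash arrays, as in build_nvmf_app_args.
set -u

NVMF_APP=(nvmf_tgt)
NVMF_APP_SHM_ID=0
interrupt_mode=1          # mirrors the '[' 1 -eq 1 ']' branch in the log

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
if [ "$interrupt_mode" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)
fi

printf '%s\n' "${NVMF_APP[@]}"
```

Expanding the array later as `"${NVMF_APP[@]}"` preserves each element as one word, which is why the harness also prepends the namespace wrapper with `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` earlier in the log.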
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:56.367 03:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.645 
03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.645 03:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:01.645 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.645 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:01.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:30:01.646 Found net devices under 0000:86:00.0: cvl_0_0
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:30:01.646 Found net devices under 0000:86:00.1: cvl_0_1
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:01.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:01.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms
00:30:01.646
00:30:01.646 --- 10.0.0.2 ping statistics ---
00:30:01.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:01.646 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:01.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:01.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms
00:30:01.646
00:30:01.646 --- 10.0.0.1 ping statistics ---
00:30:01.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:01.646 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:01.646 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2827772
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2827772
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2827772 ']'
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:01.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:01.906 03:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:01.906 [2024-12-06 03:38:21.855900] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:30:01.906 [2024-12-06 03:38:21.856831] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization...
00:30:01.906 [2024-12-06 03:38:21.856865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:01.906 [2024-12-06 03:38:21.922763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:01.906 [2024-12-06 03:38:21.965203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:01.906 [2024-12-06 03:38:21.965235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:01.906 [2024-12-06 03:38:21.965243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:01.906 [2024-12-06 03:38:21.965249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:01.906 [2024-12-06 03:38:21.965254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:01.906 [2024-12-06 03:38:21.965779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:01.907 [2024-12-06 03:38:22.033687] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:30:01.907 [2024-12-06 03:38:22.033912] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:02.166 [2024-12-06 03:38:22.098445] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:02.166 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:02.166 [2024-12-06 03:38:22.122660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:02.167 malloc0
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:02.167 {
00:30:02.167 "params": {
00:30:02.167 "name": "Nvme$subsystem",
00:30:02.167 "trtype": "$TEST_TRANSPORT",
00:30:02.167 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:02.167 "adrfam": "ipv4",
00:30:02.167 "trsvcid": "$NVMF_PORT",
00:30:02.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:02.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:02.167 "hdgst": ${hdgst:-false},
00:30:02.167 "ddgst": ${ddgst:-false}
00:30:02.167 },
00:30:02.167 "method": "bdev_nvme_attach_controller"
00:30:02.167 }
00:30:02.167 EOF
00:30:02.167 )")
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:30:02.167 03:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:02.167 "params": {
00:30:02.167 "name": "Nvme1",
00:30:02.167 "trtype": "tcp",
00:30:02.167 "traddr": "10.0.0.2",
00:30:02.167 "adrfam": "ipv4",
00:30:02.167 "trsvcid": "4420",
00:30:02.167 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:02.167 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:02.167 "hdgst": false,
00:30:02.167 "ddgst": false
00:30:02.167 },
00:30:02.167 "method": "bdev_nvme_attach_controller"
00:30:02.167 }'
00:30:02.167 [2024-12-06 03:38:22.208804] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization...
00:30:02.167 [2024-12-06 03:38:22.208847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827797 ]
00:30:02.167 [2024-12-06 03:38:22.269876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:02.426 [2024-12-06 03:38:22.311659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:02.426 Running I/O for 10 seconds...
00:30:04.737 8309.00 IOPS, 64.91 MiB/s
[2024-12-06T02:38:25.814Z] 8400.00 IOPS, 65.62 MiB/s
[2024-12-06T02:38:26.753Z] 8423.00 IOPS, 65.80 MiB/s
[2024-12-06T02:38:27.690Z] 8443.00 IOPS, 65.96 MiB/s
[2024-12-06T02:38:28.627Z] 8456.00 IOPS, 66.06 MiB/s
[2024-12-06T02:38:29.565Z] 8464.50 IOPS, 66.13 MiB/s
[2024-12-06T02:38:30.502Z] 8466.86 IOPS, 66.15 MiB/s
[2024-12-06T02:38:31.880Z] 8452.75 IOPS, 66.04 MiB/s
[2024-12-06T02:38:32.816Z] 8446.22 IOPS, 65.99 MiB/s
[2024-12-06T02:38:32.816Z] 8448.30 IOPS, 66.00 MiB/s
00:30:12.675 Latency(us)
00:30:12.675 [2024-12-06T02:38:32.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.675 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:30:12.675 Verification LBA range: start 0x0 length 0x1000
00:30:12.675 Nvme1n1 : 10.01 8452.01 66.03 0.00 0.00 15101.00 436.31 21655.37
00:30:12.675 [2024-12-06T02:38:32.816Z] ===================================================================================================================
00:30:12.675 [2024-12-06T02:38:32.816Z] Total : 8452.01 66.03 0.00 0.00 15101.00 436.31 21655.37
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2829402
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:12.675 {
00:30:12.675 "params": {
00:30:12.675 "name": "Nvme$subsystem",
00:30:12.675 "trtype": "$TEST_TRANSPORT",
00:30:12.675 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:12.675 "adrfam": "ipv4",
00:30:12.675 "trsvcid": "$NVMF_PORT",
00:30:12.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:12.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:12.675 "hdgst": ${hdgst:-false},
00:30:12.675 "ddgst": ${ddgst:-false}
00:30:12.675 },
00:30:12.675 "method": "bdev_nvme_attach_controller"
00:30:12.675 }
00:30:12.675 EOF
00:30:12.675 )")
00:30:12.675 [2024-12-06 03:38:32.666129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-12-06 03:38:32.666168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:30:12.675 03:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:12.675 "params": {
00:30:12.675 "name": "Nvme1",
00:30:12.675 "trtype": "tcp",
00:30:12.675 "traddr": "10.0.0.2",
00:30:12.675 "adrfam": "ipv4",
00:30:12.675 "trsvcid": "4420",
00:30:12.675 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:12.675 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:12.675 "hdgst": false,
00:30:12.675 "ddgst": false
00:30:12.675 },
00:30:12.675 "method": "bdev_nvme_attach_controller"
00:30:12.675 }'
00:30:12.675 [2024-12-06 03:38:32.678093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.678107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.690091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.690103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.702088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.702099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.705946] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization...
00:30:12.675 [2024-12-06 03:38:32.705992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829402 ]
00:30:12.675 [2024-12-06 03:38:32.714089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.714100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.726089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.726099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.738088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.738099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.750091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.750103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.762087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.762098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.767680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:12.675 [2024-12-06 03:38:32.774091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.774104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.786090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.786104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.798088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.798100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.810088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.675 [2024-12-06 03:38:32.810100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.675 [2024-12-06 03:38:32.812436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:12.934 [2024-12-06 03:38:32.822093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.822109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.834097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.834115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.846092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.846105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.858090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.858102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.870092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.870105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.882089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.882102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.894089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.894098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.906099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.906120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.918096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.918126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.930099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.930117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.942090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.942101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.934 [2024-12-06 03:38:32.954088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.934 [2024-12-06 03:38:32.954098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.935 [2024-12-06 03:38:32.966092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.935 [2024-12-06 03:38:32.966106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.935 [2024-12-06 03:38:32.978093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.935 [2024-12-06 03:38:32.978107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.935 [2024-12-06 03:38:32.990088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.935 [2024-12-06 03:38:32.990098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.935 [2024-12-06 03:38:33.002091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.935 [2024-12-06 03:38:33.002104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.935 [2024-12-06 03:38:33.014088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.935 [2024-12-06 03:38:33.014098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.935 [2024-12-06 03:38:33.026093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.935 [2024-12-06 03:38:33.026106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.935 [2024-12-06 03:38:33.038088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.935 [2024-12-06 03:38:33.038098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.935 [2024-12-06 03:38:33.050090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.935 [2024-12-06 03:38:33.050104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:12.935 [2024-12-06 03:38:33.062091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:12.935 [2024-12-06 03:38:33.062102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:13.193 [2024-12-06 03:38:33.074091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:13.193 [2024-12-06 03:38:33.074102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:13.193 [2024-12-06 03:38:33.086087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:13.193 [2024-12-06 03:38:33.086096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:13.193 [2024-12-06 03:38:33.098088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:13.193 [2024-12-06 03:38:33.098097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:13.193 [2024-12-06 03:38:33.110089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:13.193 [2024-12-06 03:38:33.110100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:13.193 [2024-12-06 03:38:33.122095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:13.193 [2024-12-06 03:38:33.122113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:13.193 Running I/O for 5 seconds...
00:30:13.193 [2024-12-06 03:38:33.139844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.139864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.155854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.155873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.171419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.171438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.187375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.187394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.202501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.202520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.218221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.218240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.231585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.231604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.247715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.247735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.262799] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.262818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.278155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.278173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.289857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.289876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.304090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.304109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.193 [2024-12-06 03:38:33.319198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.193 [2024-12-06 03:38:33.319216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.334342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.334361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.346092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.346111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.360035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.360054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.375532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.375554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.390581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.390599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.405459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.405478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.420167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.420186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.434944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.434969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.450432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.450451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.463090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.463109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.478310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.478328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.489219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 
[2024-12-06 03:38:33.489237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.504320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.504339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.519506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.519524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.534658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.534676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.547217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.547236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.558207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.558225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.572342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.572361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.452 [2024-12-06 03:38:33.587964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.452 [2024-12-06 03:38:33.587983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.603091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.603115] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.618734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.618752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.634632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.634650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.650774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.650793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.666309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.666327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.678700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.678719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.692211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.692230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.708131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.708151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.723490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.723511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:13.709 [2024-12-06 03:38:33.738548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.738567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.749885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.749903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.764324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.764342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.779437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.779455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.794892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.794911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.809855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.809873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.820555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.820573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.709 [2024-12-06 03:38:33.836128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.709 [2024-12-06 03:38:33.836146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.851168] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.851186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.866509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.866528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.882053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.882075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.895743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.895763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.911019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.911038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.921759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.921778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.936380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.936398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.952119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.952137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.967015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.967035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.982590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.982609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.966 [2024-12-06 03:38:33.997729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.966 [2024-12-06 03:38:33.997747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.967 [2024-12-06 03:38:34.012267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.967 [2024-12-06 03:38:34.012286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.967 [2024-12-06 03:38:34.027262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.967 [2024-12-06 03:38:34.027281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.967 [2024-12-06 03:38:34.042631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.967 [2024-12-06 03:38:34.042649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.967 [2024-12-06 03:38:34.057355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.967 [2024-12-06 03:38:34.057374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.967 [2024-12-06 03:38:34.072138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.967 [2024-12-06 03:38:34.072158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.967 [2024-12-06 03:38:34.086462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.967 
[2024-12-06 03:38:34.086480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:13.967 [2024-12-06 03:38:34.102688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:13.967 [2024-12-06 03:38:34.102708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.118800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.118819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.134390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.134413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 16192.00 IOPS, 126.50 MiB/s [2024-12-06T02:38:34.372Z] [2024-12-06 03:38:34.150042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.150061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.163873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.163892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.178946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.178969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.194467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.194485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.206245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 
[2024-12-06 03:38:34.206263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.220577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.220597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.235706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.235726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.250900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.250919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.265885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.265903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.280028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.280047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.295430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.295448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.310651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.310669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.231 [2024-12-06 03:38:34.325673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.231 [2024-12-06 03:38:34.325692] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.232 [2024-12-06 03:38:34.339974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.232 [2024-12-06 03:38:34.339993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.232 [2024-12-06 03:38:34.355140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.232 [2024-12-06 03:38:34.355158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.370095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.370119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.381495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.381514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.396129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.396147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.410779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.410801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.423997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.424016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.439263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.439281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:14.490 [2024-12-06 03:38:34.454066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.454085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.466560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.466578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.479434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.479454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.490531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.490549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.503835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.503855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.518749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.518767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.533690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.533708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.548403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.548422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.563502] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.563520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.578234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.578252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.589175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.589192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.603433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.603451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.490 [2024-12-06 03:38:34.618866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.490 [2024-12-06 03:38:34.618884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.748 [2024-12-06 03:38:34.634715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.748 [2024-12-06 03:38:34.634733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.650528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.650545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.666463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.666481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.682111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.682133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.695709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.695727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.710874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.710892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.727133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.727152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.742139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.742159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.753230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.753249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.768039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.768058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.782760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.782778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.798039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 
[2024-12-06 03:38:34.798057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.811751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.811769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.826985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.827003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.843090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.843108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.858133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.858151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:14.749 [2024-12-06 03:38:34.872009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:14.749 [2024-12-06 03:38:34.872028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.007 [2024-12-06 03:38:34.886855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.007 [2024-12-06 03:38:34.886873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.007 [2024-12-06 03:38:34.901907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:34.901926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:34.913539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:34.913557] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:34.927939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:34.927967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:34.942849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:34.942867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:34.957507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:34.957528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:34.971092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:34.971111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:34.985860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:34.985879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:34.997628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:34.997647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:35.012571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.012590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:35.027394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.027412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:15.008 [2024-12-06 03:38:35.042539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.042558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:35.054039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.054058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:35.068457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.068475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:35.083881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.083899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:35.098464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.098482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:35.114206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.114224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 [2024-12-06 03:38:35.128297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.128315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.008 16339.50 IOPS, 127.65 MiB/s [2024-12-06T02:38:35.149Z] [2024-12-06 03:38:35.143083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.008 [2024-12-06 03:38:35.143101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:15.267 [2024-12-06 03:38:35.158394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.158411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.173723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.173741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.187795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.187813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.203064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.203082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.217967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.217986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.231143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.231161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.246469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.246488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.262429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.262448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.273989] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.274008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.288230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.288249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.303327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.303345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.318203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.318221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.330109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.330127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.344159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.344177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.359040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.359058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.374401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.374419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.390423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.390441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.267 [2024-12-06 03:38:35.403190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.267 [2024-12-06 03:38:35.403208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.414584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.414602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.428024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.428042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.442970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.442987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.457736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.457754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.471494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.471512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.486181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.486204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.497519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 
[2024-12-06 03:38:35.497537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.512057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.512076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.527256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.527274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.542342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.542362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.553024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.553043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.568052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.568072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.582932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.582960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.598479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.598498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.614460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.614478] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.626831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.626849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.639602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.639620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.525 [2024-12-06 03:38:35.654472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.525 [2024-12-06 03:38:35.654491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.670101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.670120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.683874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.683893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.699085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.699104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.713897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.713916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.726702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.726720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:15.783 [2024-12-06 03:38:35.741918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.741941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.755754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.755778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.770619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.770639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.785980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.786000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.799591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.799609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.814461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.814479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.830457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.830475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.842912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.842930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.857969] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.857988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.871803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.871822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.886275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.886293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.899991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.900010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:15.783 [2024-12-06 03:38:35.914985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:15.783 [2024-12-06 03:38:35.915002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:35.929904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:35.929922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:35.942706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:35.942724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:35.955841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:35.955859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:35.971027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:35.971045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:35.986095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:35.986112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:35.997542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:35.997559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:36.012466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:36.012485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:36.027110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:36.027133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:36.042068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:36.042087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:36.054641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:36.054659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:36.067735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.041 [2024-12-06 03:38:36.067754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.041 [2024-12-06 03:38:36.082844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.042 
[2024-12-06 03:38:36.082862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.042 [2024-12-06 03:38:36.097686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.042 [2024-12-06 03:38:36.097707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.042 [2024-12-06 03:38:36.111922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.042 [2024-12-06 03:38:36.111940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.042 [2024-12-06 03:38:36.127000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.042 [2024-12-06 03:38:36.127018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.042 16410.67 IOPS, 128.21 MiB/s [2024-12-06T02:38:36.183Z] [2024-12-06 03:38:36.142077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.042 [2024-12-06 03:38:36.142106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.042 [2024-12-06 03:38:36.153634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.042 [2024-12-06 03:38:36.153652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.042 [2024-12-06 03:38:36.167767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.042 [2024-12-06 03:38:36.167785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.182487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.182506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.198034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 
[2024-12-06 03:38:36.198052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.211746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.211764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.226965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.226983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.241676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.241694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.254677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.254695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.267144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.267162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.281943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.281967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.294798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.294817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.308009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.308027] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.322742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.322759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.338675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.338693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.354321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.354340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.366662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.366679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.379580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.379598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.394626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.394644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.408107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.408125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.311 [2024-12-06 03:38:36.423198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.423216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:16.311 [2024-12-06 03:38:36.437559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.311 [2024-12-06 03:38:36.437577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.572 [2024-12-06 03:38:36.451432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.572 [2024-12-06 03:38:36.451451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.572 [2024-12-06 03:38:36.466154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.572 [2024-12-06 03:38:36.466173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.572 [2024-12-06 03:38:36.479972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.572 [2024-12-06 03:38:36.479990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.572 [2024-12-06 03:38:36.494864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.572 [2024-12-06 03:38:36.494881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.572 [2024-12-06 03:38:36.509574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.572 [2024-12-06 03:38:36.509593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.572 [2024-12-06 03:38:36.522867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.572 [2024-12-06 03:38:36.522885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.572 [2024-12-06 03:38:36.537990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.572 [2024-12-06 03:38:36.538009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.572 [2024-12-06 03:38:36.549314] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.572 [2024-12-06 03:38:36.549332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.572 [2024-12-06 03:38:36.563838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.563857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.573 [2024-12-06 03:38:36.578982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.579000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.573 [2024-12-06 03:38:36.594528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.594546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.573 [2024-12-06 03:38:36.610270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.610288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.573 [2024-12-06 03:38:36.621701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.621719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.573 [2024-12-06 03:38:36.635954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.635973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.573 [2024-12-06 03:38:36.651112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.651129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.573 [2024-12-06 03:38:36.666451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.666468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.573 [2024-12-06 03:38:36.682060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.682078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.573 [2024-12-06 03:38:36.696157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.573 [2024-12-06 03:38:36.696176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.831 [2024-12-06 03:38:36.711236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.831 [2024-12-06 03:38:36.711254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.831 [2024-12-06 03:38:36.726781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.831 [2024-12-06 03:38:36.726799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.831 [2024-12-06 03:38:36.742133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.831 [2024-12-06 03:38:36.742152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.831 [2024-12-06 03:38:36.756440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.831 [2024-12-06 03:38:36.756458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.770989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.771007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.785885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 
[2024-12-06 03:38:36.785904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.796601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.796620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.811853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.811873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.826752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.826770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.842172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.842192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.852922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.852941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.867977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.867996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.882656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.882673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.897733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.897752] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.912185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.912204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.926906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.926924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.941866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.941885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.832 [2024-12-06 03:38:36.955790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.832 [2024-12-06 03:38:36.955809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.104 [2024-12-06 03:38:36.971215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:36.971234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:36.986666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:36.986684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.002396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.002414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.018239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.018258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:17.105 [2024-12-06 03:38:37.030921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.030940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.047026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.047045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.062214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.062233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.073748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.073766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.087409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.087428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.102505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.102523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.118721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.118739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.134307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.134327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 16435.50 IOPS, 128.40 MiB/s 
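The interleaved progress line above ("16435.50 IOPS, 128.40 MiB/s") and the later job summary both report throughput for 8192-byte I/Os. The two figures are consistent with a plain IOPS-times-IO-size conversion; a minimal sketch checking that arithmetic (the helper name is my own, not an SPDK API):

```python
# Throughput check for the 8 KiB random R/W job traced in this log.
# IO_SIZE_BYTES comes from the "IO size: 8192" job description in the summary.
IO_SIZE_BYTES = 8192

def iops_to_mib_per_s(iops: float, io_size: int = IO_SIZE_BYTES) -> float:
    """Convert an IOPS rate to MiB/s for a fixed I/O size."""
    return iops * io_size / (1024 * 1024)

# Matches the progress line "16435.50 IOPS, 128.40 MiB/s" to two decimals.
print(round(iops_to_mib_per_s(16435.50), 2))
```

The same conversion reproduces the final summary ("16443.35 IOPS, 128.46 MiB/s"), so the per-interval and aggregate figures agree with each other.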
[2024-12-06T02:38:37.246Z] [2024-12-06 03:38:37.146851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.146869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.161797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.161816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.174982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.175000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.190615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.190633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.206150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.206170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.219165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.219184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.105 [2024-12-06 03:38:37.230600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.105 [2024-12-06 03:38:37.230618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.244274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.244293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.259423] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.259442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.274310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.274328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.284828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.284846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.300004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.300024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.315051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.315070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.329954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.329973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.343860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.343879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.359091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.359109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.374652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.374675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.390769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.390789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.406596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.406615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.419326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.419344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.430590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.430608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.443897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.443916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.458734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.458751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.474213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 [2024-12-06 03:38:37.474232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.364 [2024-12-06 03:38:37.487752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.364 
[2024-12-06 03:38:37.487771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.503106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.503126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.517917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.517936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.530473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.530490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.543942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.543967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.558810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.558828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.573924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.573943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.587766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.587784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.602868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.602886] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.618166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.618184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.632480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.632498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.647507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.647529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.662085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.662103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.674491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.674508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.690203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.690222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.703625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.703644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.718625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.718643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:17.623 [2024-12-06 03:38:37.733836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.733854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.623 [2024-12-06 03:38:37.746797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.623 [2024-12-06 03:38:37.746815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.761969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.761988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.775553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.775571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.790732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.790751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.806763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.806782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.822055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.822075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.835710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.835730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.850497] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.850515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.866075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.866094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.879500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.879519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.890828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.890846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.906682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.906700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.922312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.922335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.934266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.934284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.948181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.948199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.963065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.963084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.978077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.978096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:37.991822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:37.991840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.882 [2024-12-06 03:38:38.006808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.882 [2024-12-06 03:38:38.006826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.022590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.022609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.039054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.039073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.053960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.053978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.065326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.065344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.079818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 
[2024-12-06 03:38:38.079836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.094886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.094905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.110997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.111016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.126092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.126111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.140469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.140488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 16432.60 IOPS, 128.38 MiB/s 00:30:18.142 Latency(us) 00:30:18.142 [2024-12-06T02:38:38.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.142 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:18.142 Nvme1n1 : 5.00 16443.35 128.46 0.00 0.00 7777.95 2094.30 13164.19 00:30:18.142 [2024-12-06T02:38:38.283Z] =================================================================================================================== 00:30:18.142 [2024-12-06T02:38:38.283Z] Total : 16443.35 128.46 0.00 0.00 7777.95 2094.30 13164.19 00:30:18.142 [2024-12-06 03:38:38.150116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.150133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.162094] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.162110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.174099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.174112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.186099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.186118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.198095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.198108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.210098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.210112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.222093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.222126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.234092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.234105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.246091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.246105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.258089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.258101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.142 [2024-12-06 03:38:38.270088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.142 [2024-12-06 03:38:38.270097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.401 [2024-12-06 03:38:38.282093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.401 [2024-12-06 03:38:38.282104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.401 [2024-12-06 03:38:38.294088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.401 [2024-12-06 03:38:38.294100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.401 [2024-12-06 03:38:38.306089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.401 [2024-12-06 03:38:38.306099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2829402) - No such process 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2829402 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 
-- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:18.401 delay0 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.401 03:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:18.401 [2024-12-06 03:38:38.398035] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:24.973 Initializing NVMe Controllers 00:30:24.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:24.973 Initialization complete. Launching workers. 
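The error records repeated throughout this section all follow one shape: `[timestamp] file.c:line:function: *ERROR*: message`. When triaging a log like this it is convenient to parse them into fields; a minimal sketch (the regex and field names are my own, not part of SPDK or its tooling):

```python
import re

# Pattern for SPDK *ERROR* records as they appear in this log, e.g.:
#   [2024-12-06 03:38:36.912185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: \
#       *ERROR*: Requested NSID 1 already in use
ERROR_RE = re.compile(
    r"\[(?P<ts>[^\]]+)\]\s+"          # bracketed timestamp
    r"(?P<file>\S+?):(?P<line>\d+):"  # source file and line number
    r"(?P<func>\w+):\s+"              # emitting function
    r"\*ERROR\*:\s+(?P<msg>.*)"       # error message text
)

record = ("[2024-12-06 03:38:36.912185] subsystem.c:2130:"
          "spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use")
m = ERROR_RE.search(record)
print(m.group("file"), m.group("line"), m.group("msg"))
```

Grouping parsed records by `(file, line, msg)` collapses the hundreds of near-identical lines above into a single counted entry, which makes the actual failure pattern (one RPC retried against a paused subsystem) easy to see.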
00:30:24.973 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 780 00:30:24.973 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1047, failed to submit 53 00:30:24.973 success 919, unsuccessful 128, failed 0 00:30:24.973 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:24.973 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:24.973 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.973 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:24.973 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.973 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:24.973 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.974 rmmod nvme_tcp 00:30:24.974 rmmod nvme_fabrics 00:30:24.974 rmmod nvme_keyring 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2827772 ']' 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2827772 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 2827772 ']' 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2827772 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2827772 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2827772' 00:30:24.974 killing process with pid 2827772 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2827772 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2827772 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.974 03:38:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.974 03:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.508 00:30:27.508 real 0m30.956s 00:30:27.508 user 0m40.624s 00:30:27.508 sys 0m11.909s 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:27.508 ************************************ 00:30:27.508 END TEST nvmf_zcopy 00:30:27.508 ************************************ 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:27.508 
************************************ 00:30:27.508 START TEST nvmf_nmic 00:30:27.508 ************************************ 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:27.508 * Looking for test storage... 00:30:27.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.508 03:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.508 03:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:27.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.508 --rc genhtml_branch_coverage=1 00:30:27.508 --rc genhtml_function_coverage=1 00:30:27.508 --rc genhtml_legend=1 00:30:27.508 --rc geninfo_all_blocks=1 00:30:27.508 --rc geninfo_unexecuted_blocks=1 00:30:27.508 00:30:27.508 ' 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:27.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.508 --rc genhtml_branch_coverage=1 00:30:27.508 --rc genhtml_function_coverage=1 00:30:27.508 --rc genhtml_legend=1 00:30:27.508 --rc geninfo_all_blocks=1 00:30:27.508 --rc geninfo_unexecuted_blocks=1 00:30:27.508 00:30:27.508 ' 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:27.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.508 --rc genhtml_branch_coverage=1 00:30:27.508 --rc genhtml_function_coverage=1 00:30:27.508 --rc genhtml_legend=1 00:30:27.508 --rc geninfo_all_blocks=1 00:30:27.508 --rc geninfo_unexecuted_blocks=1 00:30:27.508 00:30:27.508 ' 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:27.508 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.508 --rc genhtml_branch_coverage=1 00:30:27.508 --rc genhtml_function_coverage=1 00:30:27.508 --rc genhtml_legend=1 00:30:27.508 --rc geninfo_all_blocks=1 00:30:27.508 --rc geninfo_unexecuted_blocks=1 00:30:27.508 00:30:27.508 ' 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:27.508 03:38:47 
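Annotation: the `cmp_versions 1.15 '<' 2` trace above shows how scripts/common.sh decides whether the installed lcov is older than 2: each version string is split on `.`, `-`, and `:` into an array (`IFS=.-: read -ra ver1`), then components are compared numerically left to right. The sketch below is a simplified standalone reconstruction of that logic (the helper name `lt` mirrors the trace, but this is not the actual SPDK implementation):

```shell
#!/usr/bin/env bash
# Simplified version comparison in the style of scripts/common.sh:
# split each version on . - : and compare element-wise as integers.
lt() {  # succeeds (exit 0) iff version $1 < version $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # earlier component decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2 || echo "2.1 >= 2"
```

Missing components are treated as 0, so `1.15` vs `2` compares `1 < 2` on the first element and stops there, which matches the `ver1[v]=1` / `ver2[v]=2` steps in the trace.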
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.508 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.509 03:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.509 03:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:32.788 03:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.788 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.789 03:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:32.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:32.789 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.789 03:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:32.789 Found net devices under 0000:86:00.0: cvl_0_0 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.789 03:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:32.789 Found net devices under 0000:86:00.1: cvl_0_1 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
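Annotation: the "Found net devices under 0000:86:00.0: cvl_0_0" messages come from `gather_supported_nvmf_pci_devs`, which resolves each candidate PCI function to its kernel network interface through sysfs: every network-capable PCI device lists its interface names under `/sys/bus/pci/devices/<bdf>/net/`. A minimal read-only sketch of that mapping (scans all PCI devices rather than the harness's curated e810/x722/mlx ID lists):

```shell
# Sketch of the pci -> net-device mapping seen in the trace: list the
# network interfaces backing each PCI function via sysfs (no root needed).
shopt -s nullglob
for pci in /sys/bus/pci/devices/*; do
    pci_net_devs=("$pci"/net/*)
    (( ${#pci_net_devs[@]} )) || continue          # not a network device
    pci_net_devs=("${pci_net_devs[@]##*/}")        # strip path, keep iface names
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
done
```

On the test node this yields `cvl_0_0` and `cvl_0_1` for the two ice-driven 0x159b functions; on other machines the output depends entirely on the installed NICs.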
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:32.789 03:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:32.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:30:32.789 00:30:32.789 --- 10.0.0.2 ping statistics --- 00:30:32.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.789 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:30:32.789 00:30:32.789 --- 10.0.0.1 ping statistics --- 00:30:32.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.789 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2834877 
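Annotation: `nvmf_tcp_init` (traced above) builds the test topology by moving the target-side interface into a fresh network namespace, addressing both sides on 10.0.0.0/24, opening TCP port 4420, and verifying connectivity with ping in both directions. The equivalent manual sequence is sketched below; it requires root and the two physical interfaces from this log (`cvl_0_0`/`cvl_0_1`), so treat it as an environment-specific recipe rather than something runnable as-is elsewhere:

```shell
# Recreate the namespace topology from the trace (run as root;
# interface names taken from this log -- substitute your own NICs).
TGT_IF=cvl_0_0          # target side, moved into the namespace
INI_IF=cvl_0_1          # initiator side, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target IP

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the target port (the harness adds a tagged rule).
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, as the harness does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target runs inside the namespace, the harness then prefixes every target-side command with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array visible in the trace).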
00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2834877 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2834877 ']' 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.789 03:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.049 [2024-12-06 03:38:52.960251] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:33.049 [2024-12-06 03:38:52.961252] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:30:33.049 [2024-12-06 03:38:52.961294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.049 [2024-12-06 03:38:53.030100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:33.049 [2024-12-06 03:38:53.073524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.049 [2024-12-06 03:38:53.073561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.049 [2024-12-06 03:38:53.073568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.049 [2024-12-06 03:38:53.073577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.049 [2024-12-06 03:38:53.073582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.049 [2024-12-06 03:38:53.075198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.049 [2024-12-06 03:38:53.075217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.049 [2024-12-06 03:38:53.075306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:33.049 [2024-12-06 03:38:53.075308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.049 [2024-12-06 03:38:53.143870] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:33.049 [2024-12-06 03:38:53.143941] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:33.049 [2024-12-06 03:38:53.144131] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:33.049 [2024-12-06 03:38:53.144422] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:33.049 [2024-12-06 03:38:53.144592] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:33.049 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.049 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:30:33.049 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:33.049 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:33.049 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.308 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.308 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:33.308 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.308 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.308 [2024-12-06 03:38:53.207816] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.308 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.308 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:33.308 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:33.308 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.308 Malloc0 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.309 [2024-12-06 03:38:53.267984] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:33.309 test case1: single bdev can't be used in multiple subsystems 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.309 [2024-12-06 03:38:53.291724] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:30:33.309 [2024-12-06 03:38:53.291744] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:33.309 [2024-12-06 03:38:53.291752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:33.309 request: 00:30:33.309 { 00:30:33.309 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:33.309 "namespace": { 00:30:33.309 "bdev_name": "Malloc0", 00:30:33.309 "no_auto_visible": false, 00:30:33.309 "hide_metadata": false 00:30:33.309 }, 00:30:33.309 "method": "nvmf_subsystem_add_ns", 00:30:33.309 "req_id": 1 00:30:33.309 } 00:30:33.309 Got JSON-RPC error response 00:30:33.309 response: 00:30:33.309 { 00:30:33.309 "code": -32602, 00:30:33.309 "message": "Invalid parameters" 00:30:33.309 } 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:33.309 Adding namespace failed - expected result. 
00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:33.309 test case2: host connect to nvmf target in multiple paths 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:33.309 [2024-12-06 03:38:53.303818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.309 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:33.568 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:33.827 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:33.827 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:30:33.827 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:33.827 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:33.827 03:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:30:36.365 03:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:36.365 03:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:36.365 03:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:36.365 03:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:36.365 03:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:36.365 03:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:30:36.365 03:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:36.365 [global] 00:30:36.365 thread=1 00:30:36.365 invalidate=1 00:30:36.365 rw=write 00:30:36.365 time_based=1 00:30:36.365 runtime=1 00:30:36.365 ioengine=libaio 00:30:36.365 direct=1 00:30:36.365 bs=4096 00:30:36.365 iodepth=1 00:30:36.365 norandommap=0 00:30:36.365 numjobs=1 00:30:36.365 00:30:36.365 verify_dump=1 00:30:36.365 verify_backlog=512 00:30:36.365 verify_state_save=0 00:30:36.365 do_verify=1 00:30:36.366 verify=crc32c-intel 00:30:36.366 [job0] 00:30:36.366 filename=/dev/nvme0n1 00:30:36.366 Could not set queue depth (nvme0n1) 00:30:36.366 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:36.366 fio-3.35 00:30:36.366 Starting 1 thread 00:30:37.303 00:30:37.303 job0: (groupid=0, jobs=1): err= 0: pid=2835581: Fri Dec 6 
03:38:57 2024 00:30:37.303 read: IOPS=2354, BW=9419KiB/s (9645kB/s)(9428KiB/1001msec) 00:30:37.303 slat (nsec): min=6917, max=37804, avg=8362.78, stdev=1460.10 00:30:37.303 clat (usec): min=188, max=900, avg=223.95, stdev=59.85 00:30:37.303 lat (usec): min=200, max=907, avg=232.32, stdev=59.94 00:30:37.303 clat percentiles (usec): 00:30:37.303 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 202], 00:30:37.303 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 208], 00:30:37.303 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 269], 95.00th=[ 371], 00:30:37.303 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 807], 99.95th=[ 857], 00:30:37.303 | 99.99th=[ 898] 00:30:37.303 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:30:37.304 slat (usec): min=10, max=26940, avg=22.63, stdev=532.21 00:30:37.304 clat (usec): min=135, max=347, avg=148.35, stdev=12.96 00:30:37.304 lat (usec): min=147, max=27229, avg=170.98, stdev=535.16 00:30:37.304 clat percentiles (usec): 00:30:37.304 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 141], 20.00th=[ 143], 00:30:37.304 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 145], 60.00th=[ 147], 00:30:37.304 | 70.00th=[ 149], 80.00th=[ 151], 90.00th=[ 155], 95.00th=[ 165], 00:30:37.304 | 99.00th=[ 206], 99.50th=[ 208], 99.90th=[ 223], 99.95th=[ 289], 00:30:37.304 | 99.99th=[ 347] 00:30:37.304 bw ( KiB/s): min=10360, max=10360, per=100.00%, avg=10360.00, stdev= 0.00, samples=1 00:30:37.304 iops : min= 2590, max= 2590, avg=2590.00, stdev= 0.00, samples=1 00:30:37.304 lat (usec) : 250=94.94%, 500=4.88%, 750=0.08%, 1000=0.10% 00:30:37.304 cpu : usr=4.10%, sys=8.00%, ctx=4920, majf=0, minf=1 00:30:37.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:37.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.304 issued rwts: total=2357,2560,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:30:37.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:37.304 00:30:37.304 Run status group 0 (all jobs): 00:30:37.304 READ: bw=9419KiB/s (9645kB/s), 9419KiB/s-9419KiB/s (9645kB/s-9645kB/s), io=9428KiB (9654kB), run=1001-1001msec 00:30:37.304 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:30:37.304 00:30:37.304 Disk stats (read/write): 00:30:37.304 nvme0n1: ios=2073/2386, merge=0/0, ticks=1421/322, in_queue=1743, util=98.50% 00:30:37.304 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:37.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:37.574 03:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.574 rmmod nvme_tcp 00:30:37.574 rmmod nvme_fabrics 00:30:37.574 rmmod nvme_keyring 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2834877 ']' 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2834877 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2834877 ']' 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2834877 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834877 
00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834877' 00:30:37.574 killing process with pid 2834877 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2834877 00:30:37.574 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2834877 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.833 03:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.833 03:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.366 03:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:40.366 00:30:40.366 real 0m12.839s 00:30:40.366 user 0m24.610s 00:30:40.366 sys 0m5.882s 00:30:40.366 03:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.366 03:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.366 ************************************ 00:30:40.366 END TEST nvmf_nmic 00:30:40.366 ************************************ 00:30:40.366 03:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:40.366 03:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:40.366 03:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.366 03:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:40.366 ************************************ 00:30:40.366 START TEST nvmf_fio_target 00:30:40.366 ************************************ 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:40.366 * Looking for test storage... 
00:30:40.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.366 
03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:40.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.366 --rc genhtml_branch_coverage=1 00:30:40.366 --rc genhtml_function_coverage=1 00:30:40.366 --rc genhtml_legend=1 00:30:40.366 --rc geninfo_all_blocks=1 00:30:40.366 --rc geninfo_unexecuted_blocks=1 00:30:40.366 00:30:40.366 ' 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:40.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.366 --rc genhtml_branch_coverage=1 00:30:40.366 --rc genhtml_function_coverage=1 00:30:40.366 --rc genhtml_legend=1 00:30:40.366 --rc geninfo_all_blocks=1 00:30:40.366 --rc geninfo_unexecuted_blocks=1 00:30:40.366 00:30:40.366 ' 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:40.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.366 --rc genhtml_branch_coverage=1 00:30:40.366 --rc genhtml_function_coverage=1 00:30:40.366 --rc genhtml_legend=1 00:30:40.366 --rc geninfo_all_blocks=1 00:30:40.366 --rc geninfo_unexecuted_blocks=1 00:30:40.366 00:30:40.366 ' 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:40.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.366 --rc genhtml_branch_coverage=1 00:30:40.366 --rc genhtml_function_coverage=1 00:30:40.366 --rc genhtml_legend=1 00:30:40.366 --rc geninfo_all_blocks=1 
00:30:40.366 --rc geninfo_unexecuted_blocks=1 00:30:40.366 00:30:40.366 ' 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:40.366 
03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.366 03:39:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.366 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.367 
03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:40.367 03:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.367 03:39:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.773 03:39:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:45.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:45.773 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.773 
03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:45.773 Found net 
devices under 0000:86:00.0: cvl_0_0 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:45.773 Found net devices under 0000:86:00.1: cvl_0_1 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.773 03:39:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.773 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:30:45.774 00:30:45.774 --- 10.0.0.2 ping statistics --- 00:30:45.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.774 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:30:45.774 00:30:45.774 --- 10.0.0.1 ping statistics --- 00:30:45.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.774 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.774 03:39:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2839346 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2839346 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2839346 ']' 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.774 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.774 [2024-12-06 03:39:05.755096] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.774 [2024-12-06 03:39:05.756212] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:30:45.774 [2024-12-06 03:39:05.756249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.774 [2024-12-06 03:39:05.822013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.774 [2024-12-06 03:39:05.865132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.774 [2024-12-06 03:39:05.865170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.774 [2024-12-06 03:39:05.865177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.774 [2024-12-06 03:39:05.865184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.774 [2024-12-06 03:39:05.865189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.774 [2024-12-06 03:39:05.866821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.774 [2024-12-06 03:39:05.866920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.774 [2024-12-06 03:39:05.867015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.774 [2024-12-06 03:39:05.867016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.034 [2024-12-06 03:39:05.936944] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:46.034 [2024-12-06 03:39:05.937086] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:46.034 [2024-12-06 03:39:05.937248] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:46.034 [2024-12-06 03:39:05.937584] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:46.034 [2024-12-06 03:39:05.937755] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:46.034 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.034 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:46.034 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:46.034 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:46.034 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:46.034 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.034 03:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:46.294 [2024-12-06 03:39:06.175598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.294 03:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:46.294 03:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:46.294 03:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:30:46.554 03:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:46.554 03:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:46.813 03:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:46.813 03:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:47.073 03:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:47.073 03:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:47.332 03:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:47.332 03:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:47.590 03:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:47.590 03:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:47.590 03:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:47.849 03:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:30:47.849 03:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:48.108 03:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:48.367 03:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:48.367 03:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:48.367 03:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:48.367 03:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:48.626 03:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.886 [2024-12-06 03:39:08.843641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.886 03:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:49.145 03:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:49.145 03:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:49.404 03:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:49.404 03:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:30:49.404 03:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:49.404 03:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:30:49.404 03:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:30:49.404 03:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:30:51.966 03:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:51.966 03:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:51.966 03:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:51.966 03:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:30:51.966 03:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:51.966 03:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:30:51.966 03:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:51.966 [global] 00:30:51.966 thread=1 00:30:51.966 invalidate=1 00:30:51.966 rw=write 00:30:51.966 time_based=1 00:30:51.966 runtime=1 00:30:51.966 ioengine=libaio 00:30:51.966 direct=1 00:30:51.966 bs=4096 00:30:51.966 iodepth=1 00:30:51.966 norandommap=0 00:30:51.966 numjobs=1 00:30:51.966 00:30:51.966 verify_dump=1 00:30:51.966 verify_backlog=512 00:30:51.966 verify_state_save=0 00:30:51.966 do_verify=1 00:30:51.966 verify=crc32c-intel 00:30:51.966 [job0] 00:30:51.966 filename=/dev/nvme0n1 00:30:51.966 [job1] 00:30:51.966 filename=/dev/nvme0n2 00:30:51.966 [job2] 00:30:51.966 filename=/dev/nvme0n3 00:30:51.966 [job3] 00:30:51.966 filename=/dev/nvme0n4 00:30:51.966 Could not set queue depth (nvme0n1) 00:30:51.966 Could not set queue depth (nvme0n2) 00:30:51.966 Could not set queue depth (nvme0n3) 00:30:51.966 Could not set queue depth (nvme0n4) 00:30:51.966 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:51.966 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:51.966 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:51.966 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:51.966 fio-3.35 00:30:51.966 Starting 4 threads 00:30:53.343 00:30:53.343 job0: (groupid=0, jobs=1): err= 0: pid=2840985: Fri Dec 6 03:39:13 2024 00:30:53.343 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:53.343 slat (nsec): min=7054, max=22710, avg=8821.21, stdev=1331.69 00:30:53.343 clat (usec): min=226, max=1623, avg=248.33, stdev=31.93 00:30:53.343 lat (usec): min=233, 
max=1632, avg=257.15, stdev=32.05 00:30:53.343 clat percentiles (usec): 00:30:53.343 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 241], 00:30:53.343 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:30:53.343 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:30:53.343 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 322], 99.95th=[ 441], 00:30:53.343 | 99.99th=[ 1631] 00:30:53.343 write: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(9.86MiB/1001msec); 0 zone resets 00:30:53.343 slat (nsec): min=10035, max=49053, avg=11621.70, stdev=1939.73 00:30:53.343 clat (usec): min=144, max=360, avg=170.41, stdev=16.51 00:30:53.343 lat (usec): min=155, max=387, avg=182.03, stdev=16.90 00:30:53.343 clat percentiles (usec): 00:30:53.343 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:30:53.343 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:30:53.343 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 200], 00:30:53.343 | 99.00th=[ 229], 99.50th=[ 245], 99.90th=[ 330], 99.95th=[ 338], 00:30:53.343 | 99.99th=[ 359] 00:30:53.343 bw ( KiB/s): min= 9752, max= 9752, per=41.03%, avg=9752.00, stdev= 0.00, samples=1 00:30:53.343 iops : min= 2438, max= 2438, avg=2438.00, stdev= 0.00, samples=1 00:30:53.343 lat (usec) : 250=85.48%, 500=14.50% 00:30:53.343 lat (msec) : 2=0.02% 00:30:53.343 cpu : usr=5.20%, sys=6.20%, ctx=4574, majf=0, minf=1 00:30:53.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.343 issued rwts: total=2048,2525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:53.343 job1: (groupid=0, jobs=1): err= 0: pid=2840986: Fri Dec 6 03:39:13 2024 00:30:53.343 read: IOPS=2137, BW=8551KiB/s (8757kB/s)(8560KiB/1001msec) 00:30:53.343 
slat (nsec): min=2727, max=26365, avg=6557.98, stdev=2026.52 00:30:53.343 clat (usec): min=191, max=406, avg=232.45, stdev=17.60 00:30:53.343 lat (usec): min=198, max=413, avg=239.01, stdev=16.47 00:30:53.343 clat percentiles (usec): 00:30:53.343 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 217], 00:30:53.343 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 237], 00:30:53.343 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 260], 00:30:53.343 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 334], 00:30:53.343 | 99.99th=[ 408] 00:30:53.343 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:30:53.343 slat (nsec): min=3696, max=42745, avg=9869.70, stdev=2683.43 00:30:53.343 clat (usec): min=134, max=369, avg=176.53, stdev=34.97 00:30:53.343 lat (usec): min=141, max=379, avg=186.40, stdev=35.62 00:30:53.343 clat percentiles (usec): 00:30:53.343 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:30:53.343 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:30:53.343 | 70.00th=[ 172], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 243], 00:30:53.343 | 99.00th=[ 247], 99.50th=[ 265], 99.90th=[ 314], 99.95th=[ 330], 00:30:53.343 | 99.99th=[ 371] 00:30:53.343 bw ( KiB/s): min=10072, max=10072, per=42.37%, avg=10072.00, stdev= 0.00, samples=1 00:30:53.343 iops : min= 2518, max= 2518, avg=2518.00, stdev= 0.00, samples=1 00:30:53.343 lat (usec) : 250=89.98%, 500=10.02% 00:30:53.343 cpu : usr=2.60%, sys=3.60%, ctx=4701, majf=0, minf=2 00:30:53.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.343 issued rwts: total=2140,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:53.343 job2: (groupid=0, jobs=1): err= 
0: pid=2840989: Fri Dec 6 03:39:13 2024 00:30:53.343 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:30:53.343 slat (nsec): min=9531, max=25044, avg=23319.59, stdev=3117.15 00:30:53.343 clat (usec): min=40897, max=41528, avg=40988.93, stdev=126.09 00:30:53.343 lat (usec): min=40921, max=41538, avg=41012.25, stdev=123.16 00:30:53.343 clat percentiles (usec): 00:30:53.343 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:53.343 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:53.343 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:53.343 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:30:53.343 | 99.99th=[41681] 00:30:53.343 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:30:53.343 slat (nsec): min=9768, max=37446, avg=11014.17, stdev=2023.45 00:30:53.343 clat (usec): min=158, max=533, avg=194.16, stdev=31.37 00:30:53.343 lat (usec): min=169, max=544, avg=205.18, stdev=32.03 00:30:53.343 clat percentiles (usec): 00:30:53.343 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 178], 00:30:53.343 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:30:53.343 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 227], 00:30:53.343 | 99.00th=[ 310], 99.50th=[ 437], 99.90th=[ 537], 99.95th=[ 537], 00:30:53.343 | 99.99th=[ 537] 00:30:53.343 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:30:53.343 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:53.343 lat (usec) : 250=93.82%, 500=1.69%, 750=0.37% 00:30:53.343 lat (msec) : 50=4.12% 00:30:53.343 cpu : usr=0.10%, sys=0.69%, ctx=536, majf=0, minf=1 00:30:53.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:53.343 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:53.343 job3: (groupid=0, jobs=1): err= 0: pid=2840990: Fri Dec 6 03:39:13 2024 00:30:53.344 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:30:53.344 slat (nsec): min=10833, max=27381, avg=24982.27, stdev=3286.41 00:30:53.344 clat (usec): min=40875, max=41199, avg=40973.52, stdev=62.15 00:30:53.344 lat (usec): min=40900, max=41210, avg=40998.50, stdev=59.58 00:30:53.344 clat percentiles (usec): 00:30:53.344 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:53.344 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:53.344 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:53.344 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:53.344 | 99.99th=[41157] 00:30:53.344 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:30:53.344 slat (nsec): min=9570, max=36784, avg=14132.43, stdev=2078.42 00:30:53.344 clat (usec): min=155, max=385, avg=227.43, stdev=27.54 00:30:53.344 lat (usec): min=167, max=422, avg=241.56, stdev=28.34 00:30:53.344 clat percentiles (usec): 00:30:53.344 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 186], 20.00th=[ 212], 00:30:53.344 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:30:53.344 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:30:53.344 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 388], 99.95th=[ 388], 00:30:53.344 | 99.99th=[ 388] 00:30:53.344 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:30:53.344 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:53.344 lat (usec) : 250=79.78%, 500=16.10% 00:30:53.344 lat (msec) : 50=4.12% 00:30:53.344 cpu : usr=0.78%, sys=0.68%, ctx=538, majf=0, minf=1 00:30:53.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.344 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:53.344 00:30:53.344 Run status group 0 (all jobs): 00:30:53.344 READ: bw=16.1MiB/s (16.9MB/s), 85.6KiB/s-8551KiB/s (87.7kB/s-8757kB/s), io=16.5MiB (17.3MB), run=1001-1028msec 00:30:53.344 WRITE: bw=23.2MiB/s (24.3MB/s), 1992KiB/s-9.99MiB/s (2040kB/s-10.5MB/s), io=23.9MiB (25.0MB), run=1001-1028msec 00:30:53.344 00:30:53.344 Disk stats (read/write): 00:30:53.344 nvme0n1: ios=1822/2048, merge=0/0, ticks=438/331, in_queue=769, util=85.77% 00:30:53.344 nvme0n2: ios=1904/2048, merge=0/0, ticks=489/350, in_queue=839, util=90.58% 00:30:53.344 nvme0n3: ios=75/512, merge=0/0, ticks=1550/97, in_queue=1647, util=97.48% 00:30:53.344 nvme0n4: ios=40/512, merge=0/0, ticks=1627/115, in_queue=1742, util=97.46% 00:30:53.344 03:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:53.344 [global] 00:30:53.344 thread=1 00:30:53.344 invalidate=1 00:30:53.344 rw=randwrite 00:30:53.344 time_based=1 00:30:53.344 runtime=1 00:30:53.344 ioengine=libaio 00:30:53.344 direct=1 00:30:53.344 bs=4096 00:30:53.344 iodepth=1 00:30:53.344 norandommap=0 00:30:53.344 numjobs=1 00:30:53.344 00:30:53.344 verify_dump=1 00:30:53.344 verify_backlog=512 00:30:53.344 verify_state_save=0 00:30:53.344 do_verify=1 00:30:53.344 verify=crc32c-intel 00:30:53.344 [job0] 00:30:53.344 filename=/dev/nvme0n1 00:30:53.344 [job1] 00:30:53.344 filename=/dev/nvme0n2 00:30:53.344 [job2] 00:30:53.344 filename=/dev/nvme0n3 00:30:53.344 [job3] 00:30:53.344 filename=/dev/nvme0n4 00:30:53.344 Could not set queue depth (nvme0n1) 
00:30:53.344 Could not set queue depth (nvme0n2) 00:30:53.344 Could not set queue depth (nvme0n3) 00:30:53.344 Could not set queue depth (nvme0n4) 00:30:53.344 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:53.344 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:53.344 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:53.344 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:53.344 fio-3.35 00:30:53.344 Starting 4 threads 00:30:54.722 00:30:54.722 job0: (groupid=0, jobs=1): err= 0: pid=2841359: Fri Dec 6 03:39:14 2024 00:30:54.722 read: IOPS=214, BW=857KiB/s (877kB/s)(880KiB/1027msec) 00:30:54.722 slat (nsec): min=6790, max=25670, avg=9698.26, stdev=2837.52 00:30:54.722 clat (usec): min=228, max=41155, avg=4194.07, stdev=11966.96 00:30:54.722 lat (usec): min=237, max=41173, avg=4203.77, stdev=11968.28 00:30:54.722 clat percentiles (usec): 00:30:54.722 | 1.00th=[ 231], 5.00th=[ 247], 10.00th=[ 262], 20.00th=[ 277], 00:30:54.722 | 30.00th=[ 289], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 343], 00:30:54.722 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 437], 95.00th=[41157], 00:30:54.722 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:54.722 | 99.99th=[41157] 00:30:54.722 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:30:54.722 slat (nsec): min=9212, max=45558, avg=10333.34, stdev=1851.94 00:30:54.722 clat (usec): min=152, max=429, avg=177.05, stdev=17.75 00:30:54.722 lat (usec): min=164, max=451, avg=187.39, stdev=18.67 00:30:54.722 clat percentiles (usec): 00:30:54.722 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:30:54.722 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:30:54.722 | 70.00th=[ 180], 80.00th=[ 182], 
90.00th=[ 186], 95.00th=[ 192], 00:30:54.722 | 99.00th=[ 206], 99.50th=[ 277], 99.90th=[ 429], 99.95th=[ 429], 00:30:54.722 | 99.99th=[ 429] 00:30:54.722 bw ( KiB/s): min= 4096, max= 4096, per=17.74%, avg=4096.00, stdev= 0.00, samples=1 00:30:54.722 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:54.722 lat (usec) : 250=71.86%, 500=25.27% 00:30:54.722 lat (msec) : 50=2.87% 00:30:54.722 cpu : usr=0.39%, sys=0.58%, ctx=735, majf=0, minf=1 00:30:54.722 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.722 issued rwts: total=220,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.722 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:54.722 job1: (groupid=0, jobs=1): err= 0: pid=2841360: Fri Dec 6 03:39:14 2024 00:30:54.722 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:54.722 slat (nsec): min=6513, max=25505, avg=7510.55, stdev=959.46 00:30:54.722 clat (usec): min=200, max=440, avg=251.28, stdev=16.33 00:30:54.722 lat (usec): min=207, max=447, avg=258.79, stdev=16.35 00:30:54.722 clat percentiles (usec): 00:30:54.723 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 243], 00:30:54.723 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:30:54.723 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:30:54.723 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 396], 99.95th=[ 396], 00:30:54.723 | 99.99th=[ 441] 00:30:54.723 write: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(9.84MiB/1001msec); 0 zone resets 00:30:54.723 slat (nsec): min=9302, max=36309, avg=10514.55, stdev=1151.96 00:30:54.723 clat (usec): min=130, max=411, avg=170.98, stdev=30.75 00:30:54.723 lat (usec): min=140, max=420, avg=181.49, stdev=30.87 00:30:54.723 clat percentiles (usec): 00:30:54.723 | 1.00th=[ 135], 5.00th=[ 139], 
10.00th=[ 145], 20.00th=[ 153], 00:30:54.723 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:30:54.723 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 223], 95.00th=[ 237], 00:30:54.723 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 318], 99.95th=[ 363], 00:30:54.723 | 99.99th=[ 412] 00:30:54.723 bw ( KiB/s): min= 9496, max= 9496, per=41.13%, avg=9496.00, stdev= 0.00, samples=1 00:30:54.723 iops : min= 2374, max= 2374, avg=2374.00, stdev= 0.00, samples=1 00:30:54.723 lat (usec) : 250=79.58%, 500=20.42% 00:30:54.723 cpu : usr=2.00%, sys=4.40%, ctx=4569, majf=0, minf=1 00:30:54.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.723 issued rwts: total=2048,2520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:54.723 job2: (groupid=0, jobs=1): err= 0: pid=2841361: Fri Dec 6 03:39:14 2024 00:30:54.723 read: IOPS=507, BW=2029KiB/s (2078kB/s)(2088KiB/1029msec) 00:30:54.723 slat (nsec): min=6536, max=25275, avg=8356.99, stdev=2455.28 00:30:54.723 clat (usec): min=211, max=41250, avg=1545.65, stdev=7020.45 00:30:54.723 lat (usec): min=218, max=41259, avg=1554.01, stdev=7022.15 00:30:54.723 clat percentiles (usec): 00:30:54.723 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 273], 20.00th=[ 281], 00:30:54.723 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 285], 60.00th=[ 289], 00:30:54.723 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 334], 95.00th=[ 371], 00:30:54.723 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:54.723 | 99.99th=[41157] 00:30:54.723 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:30:54.723 slat (nsec): min=9416, max=65647, avg=11742.14, stdev=2926.79 00:30:54.723 clat (usec): min=139, max=3973, avg=196.61, stdev=122.39 
00:30:54.723 lat (usec): min=148, max=3986, avg=208.36, stdev=122.56 00:30:54.723 clat percentiles (usec): 00:30:54.723 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 163], 00:30:54.723 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 188], 60.00th=[ 200], 00:30:54.723 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 243], 00:30:54.723 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 529], 99.95th=[ 3982], 00:30:54.723 | 99.99th=[ 3982] 00:30:54.723 bw ( KiB/s): min= 4096, max= 4096, per=17.74%, avg=4096.00, stdev= 0.00, samples=2 00:30:54.723 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:30:54.723 lat (usec) : 250=67.27%, 500=31.50%, 750=0.06% 00:30:54.723 lat (msec) : 4=0.06%, 10=0.06%, 50=1.03% 00:30:54.723 cpu : usr=0.78%, sys=1.56%, ctx=1547, majf=0, minf=2 00:30:54.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.723 issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:54.723 job3: (groupid=0, jobs=1): err= 0: pid=2841362: Fri Dec 6 03:39:14 2024 00:30:54.723 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:30:54.723 slat (nsec): min=6531, max=30482, avg=7713.41, stdev=1452.21 00:30:54.723 clat (usec): min=203, max=40969, avg=384.28, stdev=2006.86 00:30:54.723 lat (usec): min=211, max=40983, avg=391.99, stdev=2007.04 00:30:54.723 clat percentiles (usec): 00:30:54.723 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 277], 00:30:54.723 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 285], 60.00th=[ 289], 00:30:54.723 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 310], 00:30:54.723 | 99.00th=[ 371], 99.50th=[ 437], 99.90th=[40633], 99.95th=[41157], 00:30:54.723 | 99.99th=[41157] 00:30:54.723 write: 
IOPS=1882, BW=7528KiB/s (7709kB/s)(7536KiB/1001msec); 0 zone resets 00:30:54.723 slat (nsec): min=9608, max=47078, avg=11780.65, stdev=2784.60 00:30:54.723 clat (usec): min=147, max=4096, avg=193.32, stdev=112.87 00:30:54.723 lat (usec): min=158, max=4140, avg=205.10, stdev=113.57 00:30:54.723 clat percentiles (usec): 00:30:54.723 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:30:54.723 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:30:54.723 | 70.00th=[ 194], 80.00th=[ 206], 90.00th=[ 229], 95.00th=[ 241], 00:30:54.723 | 99.00th=[ 258], 99.50th=[ 379], 99.90th=[ 1876], 99.95th=[ 4113], 00:30:54.723 | 99.99th=[ 4113] 00:30:54.723 bw ( KiB/s): min= 8192, max= 8192, per=35.48%, avg=8192.00, stdev= 0.00, samples=1 00:30:54.723 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:54.723 lat (usec) : 250=60.12%, 500=39.62%, 750=0.03% 00:30:54.723 lat (msec) : 2=0.09%, 10=0.03%, 50=0.12% 00:30:54.723 cpu : usr=2.10%, sys=3.10%, ctx=3421, majf=0, minf=1 00:30:54.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.723 issued rwts: total=1536,1884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:54.723 00:30:54.723 Run status group 0 (all jobs): 00:30:54.723 READ: bw=16.4MiB/s (17.2MB/s), 857KiB/s-8184KiB/s (877kB/s-8380kB/s), io=16.9MiB (17.7MB), run=1001-1029msec 00:30:54.723 WRITE: bw=22.5MiB/s (23.6MB/s), 1994KiB/s-9.83MiB/s (2042kB/s-10.3MB/s), io=23.2MiB (24.3MB), run=1001-1029msec 00:30:54.723 00:30:54.723 Disk stats (read/write): 00:30:54.723 nvme0n1: ios=250/512, merge=0/0, ticks=1632/90, in_queue=1722, util=96.89% 00:30:54.723 nvme0n2: ios=1830/2048, merge=0/0, ticks=1266/352, in_queue=1618, util=96.55% 00:30:54.723 nvme0n3: ios=538/1024, 
merge=0/0, ticks=1108/193, in_queue=1301, util=95.01% 00:30:54.723 nvme0n4: ios=1341/1536, merge=0/0, ticks=1423/299, in_queue=1722, util=98.01% 00:30:54.723 03:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:54.723 [global] 00:30:54.723 thread=1 00:30:54.723 invalidate=1 00:30:54.723 rw=write 00:30:54.723 time_based=1 00:30:54.723 runtime=1 00:30:54.723 ioengine=libaio 00:30:54.723 direct=1 00:30:54.723 bs=4096 00:30:54.723 iodepth=128 00:30:54.723 norandommap=0 00:30:54.723 numjobs=1 00:30:54.723 00:30:54.723 verify_dump=1 00:30:54.723 verify_backlog=512 00:30:54.723 verify_state_save=0 00:30:54.723 do_verify=1 00:30:54.723 verify=crc32c-intel 00:30:54.723 [job0] 00:30:54.723 filename=/dev/nvme0n1 00:30:54.723 [job1] 00:30:54.723 filename=/dev/nvme0n2 00:30:54.723 [job2] 00:30:54.723 filename=/dev/nvme0n3 00:30:54.723 [job3] 00:30:54.723 filename=/dev/nvme0n4 00:30:54.723 Could not set queue depth (nvme0n1) 00:30:54.723 Could not set queue depth (nvme0n2) 00:30:54.723 Could not set queue depth (nvme0n3) 00:30:54.723 Could not set queue depth (nvme0n4) 00:30:54.982 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:54.982 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:54.982 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:54.982 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:54.982 fio-3.35 00:30:54.982 Starting 4 threads 00:30:56.360 00:30:56.360 job0: (groupid=0, jobs=1): err= 0: pid=2841730: Fri Dec 6 03:39:16 2024 00:30:56.360 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.8MiB/1045msec) 00:30:56.360 slat (nsec): min=1126, max=11919k, avg=74266.07, 
stdev=629309.43 00:30:56.360 clat (usec): min=2849, max=49364, avg=10983.88, stdev=6737.98 00:30:56.360 lat (usec): min=2856, max=53309, avg=11058.15, stdev=6767.96 00:30:56.360 clat percentiles (usec): 00:30:56.360 | 1.00th=[ 5014], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7570], 00:30:56.360 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:30:56.360 | 70.00th=[10945], 80.00th=[12649], 90.00th=[15533], 95.00th=[23725], 00:30:56.360 | 99.00th=[45351], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:30:56.360 | 99.99th=[49546] 00:30:56.360 write: IOPS=6369, BW=24.9MiB/s (26.1MB/s)(26.0MiB/1045msec); 0 zone resets 00:30:56.360 slat (nsec): min=1875, max=9093.6k, avg=65047.89, stdev=503848.60 00:30:56.360 clat (usec): min=502, max=41533, avg=9420.77, stdev=4002.21 00:30:56.360 lat (usec): min=512, max=41542, avg=9485.81, stdev=4040.73 00:30:56.360 clat percentiles (usec): 00:30:56.360 | 1.00th=[ 3654], 5.00th=[ 4817], 10.00th=[ 5669], 20.00th=[ 6783], 00:30:56.360 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 9634], 00:30:56.360 | 70.00th=[10290], 80.00th=[11600], 90.00th=[13566], 95.00th=[15795], 00:30:56.360 | 99.00th=[27132], 99.50th=[31327], 99.90th=[31327], 99.95th=[32375], 00:30:56.360 | 99.99th=[41681] 00:30:56.360 bw ( KiB/s): min=24576, max=28672, per=44.37%, avg=26624.00, stdev=2896.31, samples=2 00:30:56.360 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:30:56.360 lat (usec) : 750=0.04% 00:30:56.360 lat (msec) : 2=0.02%, 4=0.97%, 10=63.96%, 20=30.75%, 50=4.26% 00:30:56.360 cpu : usr=4.02%, sys=6.70%, ctx=392, majf=0, minf=1 00:30:56.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:30:56.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:56.360 issued rwts: total=6357,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.360 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:30:56.360 job1: (groupid=0, jobs=1): err= 0: pid=2841731: Fri Dec 6 03:39:16 2024 00:30:56.360 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:30:56.360 slat (nsec): min=1243, max=29049k, avg=117086.14, stdev=931344.42 00:30:56.360 clat (usec): min=1880, max=58516, avg=14422.97, stdev=8426.66 00:30:56.360 lat (usec): min=1888, max=58523, avg=14540.06, stdev=8529.30 00:30:56.360 clat percentiles (usec): 00:30:56.360 | 1.00th=[ 4228], 5.00th=[ 7111], 10.00th=[ 8848], 20.00th=[ 9634], 00:30:56.360 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10552], 60.00th=[11863], 00:30:56.360 | 70.00th=[14091], 80.00th=[18744], 90.00th=[26084], 95.00th=[32113], 00:30:56.360 | 99.00th=[46924], 99.50th=[51643], 99.90th=[58459], 99.95th=[58459], 00:30:56.360 | 99.99th=[58459] 00:30:56.360 write: IOPS=3473, BW=13.6MiB/s (14.2MB/s)(13.8MiB/1015msec); 0 zone resets 00:30:56.360 slat (usec): min=2, max=16968, avg=164.14, stdev=991.31 00:30:56.360 clat (usec): min=494, max=89367, avg=23906.87, stdev=19492.31 00:30:56.360 lat (usec): min=536, max=89382, avg=24071.01, stdev=19631.19 00:30:56.360 clat percentiles (usec): 00:30:56.360 | 1.00th=[ 3556], 5.00th=[ 7111], 10.00th=[ 8160], 20.00th=[ 8455], 00:30:56.360 | 30.00th=[ 9765], 40.00th=[12649], 50.00th=[17433], 60.00th=[21890], 00:30:56.360 | 70.00th=[27395], 80.00th=[36439], 90.00th=[51643], 95.00th=[69731], 00:30:56.360 | 99.00th=[87557], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:30:56.360 | 99.99th=[89654] 00:30:56.360 bw ( KiB/s): min= 8640, max=18544, per=22.65%, avg=13592.00, stdev=7003.19, samples=2 00:30:56.360 iops : min= 2160, max= 4636, avg=3398.00, stdev=1750.80, samples=2 00:30:56.360 lat (usec) : 500=0.02%, 750=0.02% 00:30:56.360 lat (msec) : 2=0.26%, 4=1.49%, 10=29.83%, 20=37.36%, 50=24.57% 00:30:56.360 lat (msec) : 100=6.47% 00:30:56.360 cpu : usr=2.76%, sys=3.35%, ctx=261, majf=0, minf=1 00:30:56.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.5%, >=64=99.0% 00:30:56.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:56.360 issued rwts: total=3072,3526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:56.360 job2: (groupid=0, jobs=1): err= 0: pid=2841732: Fri Dec 6 03:39:16 2024 00:30:56.360 read: IOPS=2368, BW=9472KiB/s (9700kB/s)(9548KiB/1008msec) 00:30:56.360 slat (nsec): min=1109, max=42332k, avg=162792.83, stdev=1328648.59 00:30:56.360 clat (usec): min=2768, max=62033, avg=19334.61, stdev=11722.82 00:30:56.360 lat (usec): min=2775, max=62039, avg=19497.40, stdev=11810.13 00:30:56.360 clat percentiles (usec): 00:30:56.360 | 1.00th=[ 4490], 5.00th=[ 8356], 10.00th=[ 9896], 20.00th=[10814], 00:30:56.361 | 30.00th=[12649], 40.00th=[13960], 50.00th=[14484], 60.00th=[16909], 00:30:56.361 | 70.00th=[19792], 80.00th=[29754], 90.00th=[38536], 95.00th=[45351], 00:30:56.361 | 99.00th=[56886], 99.50th=[60031], 99.90th=[62129], 99.95th=[62129], 00:30:56.361 | 99.99th=[62129] 00:30:56.361 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:30:56.361 slat (usec): min=2, max=15684, avg=229.45, stdev=1096.42 00:30:56.361 clat (usec): min=2049, max=91803, avg=31805.53, stdev=21052.71 00:30:56.361 lat (usec): min=2060, max=91812, avg=32034.98, stdev=21183.23 00:30:56.361 clat percentiles (usec): 00:30:56.361 | 1.00th=[ 5473], 5.00th=[11207], 10.00th=[11207], 20.00th=[12125], 00:30:56.361 | 30.00th=[14746], 40.00th=[19530], 50.00th=[27132], 60.00th=[32375], 00:30:56.361 | 70.00th=[39584], 80.00th=[50594], 90.00th=[63701], 95.00th=[71828], 00:30:56.361 | 99.00th=[90702], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:30:56.361 | 99.99th=[91751] 00:30:56.361 bw ( KiB/s): min= 8720, max=11760, per=17.06%, avg=10240.00, stdev=2149.60, samples=2 00:30:56.361 iops : min= 2180, max= 2940, avg=2560.00, 
stdev=537.40, samples=2 00:30:56.361 lat (msec) : 4=0.69%, 10=6.19%, 20=50.72%, 50=30.75%, 100=11.66% 00:30:56.361 cpu : usr=2.28%, sys=2.48%, ctx=240, majf=0, minf=1 00:30:56.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:30:56.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:56.361 issued rwts: total=2387,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:56.361 job3: (groupid=0, jobs=1): err= 0: pid=2841733: Fri Dec 6 03:39:16 2024 00:30:56.361 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:30:56.361 slat (nsec): min=1129, max=25471k, avg=133970.44, stdev=1218905.65 00:30:56.361 clat (usec): min=693, max=55753, avg=17911.99, stdev=7564.78 00:30:56.361 lat (usec): min=700, max=55869, avg=18045.96, stdev=7677.36 00:30:56.361 clat percentiles (usec): 00:30:56.361 | 1.00th=[ 1254], 5.00th=[ 8160], 10.00th=[ 9503], 20.00th=[12518], 00:30:56.361 | 30.00th=[13435], 40.00th=[14222], 50.00th=[15926], 60.00th=[19006], 00:30:56.361 | 70.00th=[21365], 80.00th=[24773], 90.00th=[28967], 95.00th=[31065], 00:30:56.361 | 99.00th=[36439], 99.50th=[37487], 99.90th=[44303], 99.95th=[50594], 00:30:56.361 | 99.99th=[55837] 00:30:56.361 write: IOPS=2900, BW=11.3MiB/s (11.9MB/s)(11.5MiB/1012msec); 0 zone resets 00:30:56.361 slat (usec): min=2, max=26103, avg=215.42, stdev=1362.90 00:30:56.361 clat (msec): min=4, max=134, avg=27.92, stdev=24.50 00:30:56.361 lat (msec): min=4, max=135, avg=28.14, stdev=24.68 00:30:56.361 clat percentiles (msec): 00:30:56.361 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 12], 20.00th=[ 14], 00:30:56.361 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 23], 60.00th=[ 25], 00:30:56.361 | 70.00th=[ 27], 80.00th=[ 29], 90.00th=[ 63], 95.00th=[ 94], 00:30:56.361 | 99.00th=[ 118], 99.50th=[ 126], 99.90th=[ 136], 99.95th=[ 136], 00:30:56.361 | 
99.99th=[ 136] 00:30:56.361 bw ( KiB/s): min= 8192, max=14264, per=18.71%, avg=11228.00, stdev=4293.55, samples=2 00:30:56.361 iops : min= 2048, max= 3566, avg=2807.00, stdev=1073.39, samples=2 00:30:56.361 lat (usec) : 750=0.05% 00:30:56.361 lat (msec) : 2=0.91%, 10=8.41%, 20=43.33%, 50=41.47%, 100=3.89% 00:30:56.361 lat (msec) : 250=1.93% 00:30:56.361 cpu : usr=2.18%, sys=2.97%, ctx=210, majf=0, minf=1 00:30:56.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:30:56.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:56.361 issued rwts: total=2560,2935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:56.361 00:30:56.361 Run status group 0 (all jobs): 00:30:56.361 READ: bw=53.7MiB/s (56.3MB/s), 9472KiB/s-23.8MiB/s (9700kB/s-24.9MB/s), io=56.2MiB (58.9MB), run=1008-1045msec 00:30:56.361 WRITE: bw=58.6MiB/s (61.4MB/s), 9.92MiB/s-24.9MiB/s (10.4MB/s-26.1MB/s), io=61.2MiB (64.2MB), run=1008-1045msec 00:30:56.361 00:30:56.361 Disk stats (read/write): 00:30:56.361 nvme0n1: ios=5469/5632, merge=0/0, ticks=47240/44631, in_queue=91871, util=96.39% 00:30:56.361 nvme0n2: ios=2611/2919, merge=0/0, ticks=35571/60735, in_queue=96306, util=97.53% 00:30:56.361 nvme0n3: ios=1661/2048, merge=0/0, ticks=36677/58883, in_queue=95560, util=87.64% 00:30:56.361 nvme0n4: ios=2063/2170, merge=0/0, ticks=27496/53744, in_queue=81240, util=97.02% 00:30:56.361 03:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:56.361 [global] 00:30:56.361 thread=1 00:30:56.361 invalidate=1 00:30:56.361 rw=randwrite 00:30:56.361 time_based=1 00:30:56.361 runtime=1 00:30:56.361 ioengine=libaio 00:30:56.361 direct=1 00:30:56.361 bs=4096 00:30:56.361 
iodepth=128 00:30:56.361 norandommap=0 00:30:56.361 numjobs=1 00:30:56.361 00:30:56.361 verify_dump=1 00:30:56.361 verify_backlog=512 00:30:56.361 verify_state_save=0 00:30:56.361 do_verify=1 00:30:56.361 verify=crc32c-intel 00:30:56.361 [job0] 00:30:56.361 filename=/dev/nvme0n1 00:30:56.361 [job1] 00:30:56.361 filename=/dev/nvme0n2 00:30:56.361 [job2] 00:30:56.361 filename=/dev/nvme0n3 00:30:56.361 [job3] 00:30:56.361 filename=/dev/nvme0n4 00:30:56.361 Could not set queue depth (nvme0n1) 00:30:56.361 Could not set queue depth (nvme0n2) 00:30:56.361 Could not set queue depth (nvme0n3) 00:30:56.361 Could not set queue depth (nvme0n4) 00:30:56.620 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:56.620 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:56.620 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:56.620 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:56.620 fio-3.35 00:30:56.620 Starting 4 threads 00:30:57.999 00:30:57.999 job0: (groupid=0, jobs=1): err= 0: pid=2842103: Fri Dec 6 03:39:17 2024 00:30:57.999 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:30:57.999 slat (nsec): min=1235, max=11247k, avg=86911.88, stdev=634989.62 00:30:57.999 clat (usec): min=4719, max=37963, avg=10328.88, stdev=3696.03 00:30:57.999 lat (usec): min=4729, max=37972, avg=10415.79, stdev=3769.12 00:30:57.999 clat percentiles (usec): 00:30:57.999 | 1.00th=[ 5014], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7504], 00:30:57.999 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 9765], 60.00th=[11207], 00:30:57.999 | 70.00th=[11600], 80.00th=[12780], 90.00th=[13960], 95.00th=[16319], 00:30:57.999 | 99.00th=[23725], 99.50th=[27395], 99.90th=[32900], 99.95th=[32900], 00:30:57.999 | 99.99th=[38011] 00:30:57.999 
write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:30:58.000 slat (usec): min=2, max=9728, avg=93.91, stdev=513.88 00:30:58.000 clat (usec): min=1291, max=57274, avg=13198.49, stdev=9473.78 00:30:58.000 lat (usec): min=2765, max=57282, avg=13292.40, stdev=9530.19 00:30:58.000 clat percentiles (usec): 00:30:58.000 | 1.00th=[ 4359], 5.00th=[ 5080], 10.00th=[ 5932], 20.00th=[ 6980], 00:30:58.000 | 30.00th=[ 7767], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[10552], 00:30:58.000 | 70.00th=[13960], 80.00th=[20579], 90.00th=[26870], 95.00th=[31065], 00:30:58.000 | 99.00th=[50594], 99.50th=[52167], 99.90th=[57410], 99.95th=[57410], 00:30:58.000 | 99.99th=[57410] 00:30:58.000 bw ( KiB/s): min=16360, max=27592, per=33.75%, avg=21976.00, stdev=7942.22, samples=2 00:30:58.000 iops : min= 4090, max= 6898, avg=5494.00, stdev=1985.56, samples=2 00:30:58.000 lat (msec) : 2=0.01%, 4=0.30%, 10=54.72%, 20=33.08%, 50=11.18% 00:30:58.000 lat (msec) : 100=0.71% 00:30:58.000 cpu : usr=2.80%, sys=6.29%, ctx=569, majf=0, minf=1 00:30:58.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:30:58.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:58.000 issued rwts: total=5120,5621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:58.000 job1: (groupid=0, jobs=1): err= 0: pid=2842104: Fri Dec 6 03:39:17 2024 00:30:58.000 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:30:58.000 slat (nsec): min=1060, max=16534k, avg=129796.38, stdev=825645.88 00:30:58.000 clat (usec): min=7627, max=59845, avg=16071.91, stdev=8948.13 00:30:58.000 lat (usec): min=7633, max=59851, avg=16201.70, stdev=9009.40 00:30:58.000 clat percentiles (usec): 00:30:58.000 | 1.00th=[ 8717], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10945], 00:30:58.000 | 30.00th=[11338], 
40.00th=[11600], 50.00th=[12125], 60.00th=[13566], 00:30:58.000 | 70.00th=[15139], 80.00th=[20317], 90.00th=[27919], 95.00th=[36439], 00:30:58.000 | 99.00th=[52691], 99.50th=[58983], 99.90th=[60031], 99.95th=[60031], 00:30:58.000 | 99.99th=[60031] 00:30:58.000 write: IOPS=3926, BW=15.3MiB/s (16.1MB/s)(15.5MiB/1009msec); 0 zone resets 00:30:58.000 slat (nsec): min=1779, max=10751k, avg=132565.79, stdev=593268.72 00:30:58.000 clat (usec): min=2610, max=61523, avg=17646.99, stdev=12334.56 00:30:58.000 lat (usec): min=6661, max=61529, avg=17779.56, stdev=12406.17 00:30:58.000 clat percentiles (usec): 00:30:58.000 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10421], 00:30:58.000 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12125], 60.00th=[12780], 00:30:58.000 | 70.00th=[13304], 80.00th=[26608], 90.00th=[40633], 95.00th=[42730], 00:30:58.000 | 99.00th=[56886], 99.50th=[58459], 99.90th=[61604], 99.95th=[61604], 00:30:58.000 | 99.99th=[61604] 00:30:58.000 bw ( KiB/s): min=10192, max=20480, per=23.55%, avg=15336.00, stdev=7274.71, samples=2 00:30:58.000 iops : min= 2548, max= 5120, avg=3834.00, stdev=1818.68, samples=2 00:30:58.000 lat (msec) : 4=0.01%, 10=11.60%, 20=66.53%, 50=19.92%, 100=1.95% 00:30:58.000 cpu : usr=1.69%, sys=3.17%, ctx=483, majf=0, minf=1 00:30:58.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:58.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:58.000 issued rwts: total=3584,3962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:58.000 job2: (groupid=0, jobs=1): err= 0: pid=2842105: Fri Dec 6 03:39:17 2024 00:30:58.000 read: IOPS=3649, BW=14.3MiB/s (14.9MB/s)(15.0MiB/1049msec) 00:30:58.000 slat (nsec): min=1378, max=14273k, avg=98217.60, stdev=743935.50 00:30:58.000 clat (usec): min=1207, max=64059, avg=14294.49, 
stdev=9620.49 00:30:58.000 lat (usec): min=1212, max=65348, avg=14392.71, stdev=9676.20 00:30:58.000 clat percentiles (usec): 00:30:58.000 | 1.00th=[ 1860], 5.00th=[ 5604], 10.00th=[ 6587], 20.00th=[ 8979], 00:30:58.000 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11076], 60.00th=[11600], 00:30:58.000 | 70.00th=[15270], 80.00th=[17171], 90.00th=[23725], 95.00th=[33817], 00:30:58.000 | 99.00th=[51643], 99.50th=[51643], 99.90th=[64226], 99.95th=[64226], 00:30:58.000 | 99.99th=[64226] 00:30:58.000 write: IOPS=3904, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1049msec); 0 zone resets 00:30:58.000 slat (usec): min=2, max=10998, avg=127.49, stdev=626.45 00:30:58.000 clat (usec): min=717, max=55191, avg=19035.00, stdev=12082.12 00:30:58.000 lat (usec): min=1453, max=55197, avg=19162.49, stdev=12153.77 00:30:58.000 clat percentiles (usec): 00:30:58.000 | 1.00th=[ 1991], 5.00th=[ 5342], 10.00th=[ 7570], 20.00th=[ 9765], 00:30:58.000 | 30.00th=[11338], 40.00th=[12649], 50.00th=[13829], 60.00th=[15926], 00:30:58.000 | 70.00th=[24773], 80.00th=[32375], 90.00th=[40109], 95.00th=[41681], 00:30:58.000 | 99.00th=[44303], 99.50th=[44827], 99.90th=[55313], 99.95th=[55313], 00:30:58.000 | 99.99th=[55313] 00:30:58.000 bw ( KiB/s): min=12112, max=20656, per=25.16%, avg=16384.00, stdev=6041.52, samples=2 00:30:58.000 iops : min= 3028, max= 5164, avg=4096.00, stdev=1510.38, samples=2 00:30:58.000 lat (usec) : 750=0.01% 00:30:58.000 lat (msec) : 2=1.25%, 4=1.98%, 10=21.23%, 20=51.29%, 50=22.48% 00:30:58.000 lat (msec) : 100=1.77% 00:30:58.000 cpu : usr=3.44%, sys=4.58%, ctx=448, majf=0, minf=1 00:30:58.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:58.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:58.000 issued rwts: total=3828,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.000 latency : target=0, window=0, percentile=100.00%, depth=128 
00:30:58.000 job3: (groupid=0, jobs=1): err= 0: pid=2842106: Fri Dec 6 03:39:17 2024 00:30:58.000 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:30:58.000 slat (nsec): min=1180, max=17857k, avg=130610.32, stdev=819296.33 00:30:58.000 clat (usec): min=5651, max=47877, avg=16027.33, stdev=5730.39 00:30:58.000 lat (usec): min=5658, max=47886, avg=16157.94, stdev=5781.97 00:30:58.000 clat percentiles (usec): 00:30:58.000 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11207], 20.00th=[12387], 00:30:58.000 | 30.00th=[13829], 40.00th=[14484], 50.00th=[15270], 60.00th=[15795], 00:30:58.000 | 70.00th=[16188], 80.00th=[17171], 90.00th=[20055], 95.00th=[32375], 00:30:58.000 | 99.00th=[39060], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:30:58.000 | 99.99th=[47973] 00:30:58.000 write: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1008msec); 0 zone resets 00:30:58.000 slat (nsec): min=1841, max=14779k, avg=172997.30, stdev=859310.84 00:30:58.000 clat (usec): min=1487, max=65275, avg=23037.82, stdev=12115.03 00:30:58.000 lat (usec): min=6253, max=65298, avg=23210.82, stdev=12186.64 00:30:58.000 clat percentiles (usec): 00:30:58.000 | 1.00th=[ 9503], 5.00th=[11338], 10.00th=[11863], 20.00th=[13566], 00:30:58.000 | 30.00th=[14877], 40.00th=[17171], 50.00th=[19792], 60.00th=[21890], 00:30:58.000 | 70.00th=[26346], 80.00th=[30016], 90.00th=[40109], 95.00th=[54264], 00:30:58.000 | 99.00th=[62129], 99.50th=[63701], 99.90th=[65274], 99.95th=[65274], 00:30:58.000 | 99.99th=[65274] 00:30:58.000 bw ( KiB/s): min=10624, max=15544, per=20.09%, avg=13084.00, stdev=3478.97, samples=2 00:30:58.000 iops : min= 2656, max= 3886, avg=3271.00, stdev=869.74, samples=2 00:30:58.000 lat (msec) : 2=0.02%, 10=2.78%, 20=67.02%, 50=27.24%, 100=2.94% 00:30:58.000 cpu : usr=1.59%, sys=3.38%, ctx=347, majf=0, minf=1 00:30:58.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:30:58.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:58.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:58.000 issued rwts: total=3072,3399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:58.000 00:30:58.000 Run status group 0 (all jobs): 00:30:58.000 READ: bw=58.1MiB/s (60.9MB/s), 11.9MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=61.0MiB (63.9MB), run=1002-1049msec 00:30:58.000 WRITE: bw=63.6MiB/s (66.7MB/s), 13.2MiB/s-21.9MiB/s (13.8MB/s-23.0MB/s), io=66.7MiB (70.0MB), run=1002-1049msec 00:30:58.000 00:30:58.000 Disk stats (read/write): 00:30:58.000 nvme0n1: ios=4132/4353, merge=0/0, ticks=42163/63464, in_queue=105627, util=96.99% 00:30:58.000 nvme0n2: ios=3353/3584, merge=0/0, ticks=16592/15378, in_queue=31970, util=98.28% 00:30:58.000 nvme0n3: ios=3625/3679, merge=0/0, ticks=42756/59267, in_queue=102023, util=97.09% 00:30:58.000 nvme0n4: ios=2585/2943, merge=0/0, ticks=18415/27268, in_queue=45683, util=94.35% 00:30:58.000 03:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:58.000 03:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2842338 00:30:58.000 03:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:58.000 03:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:58.000 [global] 00:30:58.000 thread=1 00:30:58.000 invalidate=1 00:30:58.000 rw=read 00:30:58.000 time_based=1 00:30:58.000 runtime=10 00:30:58.000 ioengine=libaio 00:30:58.000 direct=1 00:30:58.000 bs=4096 00:30:58.000 iodepth=1 00:30:58.000 norandommap=1 00:30:58.000 numjobs=1 00:30:58.000 00:30:58.000 [job0] 00:30:58.000 filename=/dev/nvme0n1 00:30:58.000 [job1] 00:30:58.000 filename=/dev/nvme0n2 00:30:58.000 [job2] 00:30:58.000 filename=/dev/nvme0n3 
00:30:58.000 [job3] 00:30:58.000 filename=/dev/nvme0n4 00:30:58.000 Could not set queue depth (nvme0n1) 00:30:58.000 Could not set queue depth (nvme0n2) 00:30:58.000 Could not set queue depth (nvme0n3) 00:30:58.000 Could not set queue depth (nvme0n4) 00:30:58.260 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.260 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.260 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.260 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.260 fio-3.35 00:30:58.260 Starting 4 threads 00:31:00.796 03:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:01.054 03:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:01.054 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:31:01.055 fio: pid=2842481, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:01.314 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=41918464, buflen=4096 00:31:01.314 fio: pid=2842480, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:01.314 03:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:01.314 03:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:01.573 fio: io_u error on file 
/dev/nvme0n1: Operation not supported: read offset=46297088, buflen=4096 00:31:01.573 fio: pid=2842478, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:01.573 03:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:01.573 03:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:01.842 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=38309888, buflen=4096 00:31:01.842 fio: pid=2842479, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:01.842 03:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:01.842 03:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:01.842 00:31:01.842 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2842478: Fri Dec 6 03:39:21 2024 00:31:01.842 read: IOPS=3672, BW=14.3MiB/s (15.0MB/s)(44.2MiB/3078msec) 00:31:01.842 slat (usec): min=6, max=15653, avg=10.91, stdev=164.54 00:31:01.842 clat (usec): min=178, max=2335, avg=257.37, stdev=41.31 00:31:01.842 lat (usec): min=185, max=16171, avg=268.29, stdev=172.11 00:31:01.842 clat percentiles (usec): 00:31:01.842 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:31:01.842 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:31:01.842 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 306], 00:31:01.842 | 99.00th=[ 367], 99.50th=[ 420], 99.90th=[ 474], 99.95th=[ 603], 00:31:01.842 | 99.99th=[ 1811] 00:31:01.842 bw ( KiB/s): min=14264, 
max=15488, per=39.38%, avg=14821.67, stdev=451.32, samples=6 00:31:01.842 iops : min= 3566, max= 3872, avg=3705.33, stdev=112.89, samples=6 00:31:01.842 lat (usec) : 250=49.40%, 500=50.52%, 750=0.04% 00:31:01.842 lat (msec) : 2=0.03%, 4=0.01% 00:31:01.842 cpu : usr=2.44%, sys=5.88%, ctx=11306, majf=0, minf=1 00:31:01.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.842 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.842 issued rwts: total=11304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:01.842 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2842479: Fri Dec 6 03:39:21 2024 00:31:01.842 read: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(36.5MiB/3290msec) 00:31:01.842 slat (usec): min=6, max=30151, avg=17.72, stdev=405.17 00:31:01.842 clat (usec): min=202, max=41240, avg=329.59, stdev=1618.86 00:31:01.842 lat (usec): min=209, max=41251, avg=347.31, stdev=1669.21 00:31:01.842 clat percentiles (usec): 00:31:01.842 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 237], 20.00th=[ 243], 00:31:01.842 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 258], 00:31:01.842 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 297], 00:31:01.842 | 99.00th=[ 433], 99.50th=[ 465], 99.90th=[41157], 99.95th=[41157], 00:31:01.842 | 99.99th=[41157] 00:31:01.842 bw ( KiB/s): min= 5512, max=14744, per=30.21%, avg=11372.00, stdev=3744.45, samples=6 00:31:01.842 iops : min= 1378, max= 3686, avg=2843.00, stdev=936.11, samples=6 00:31:01.842 lat (usec) : 250=38.10%, 500=61.56%, 750=0.09% 00:31:01.842 lat (msec) : 2=0.02%, 4=0.01%, 10=0.02%, 20=0.03%, 50=0.16% 00:31:01.842 cpu : usr=1.73%, sys=4.65%, ctx=9360, majf=0, minf=1 00:31:01.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:31:01.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.842 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.842 issued rwts: total=9354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:01.842 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2842480: Fri Dec 6 03:39:21 2024 00:31:01.842 read: IOPS=3571, BW=13.9MiB/s (14.6MB/s)(40.0MiB/2866msec) 00:31:01.842 slat (usec): min=6, max=15097, avg=10.37, stdev=166.99 00:31:01.842 clat (usec): min=196, max=12856, avg=265.62, stdev=130.58 00:31:01.842 lat (usec): min=204, max=15487, avg=275.99, stdev=213.34 00:31:01.842 clat percentiles (usec): 00:31:01.842 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:31:01.842 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:31:01.842 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 322], 00:31:01.842 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 482], 99.95th=[ 490], 00:31:01.842 | 99.99th=[ 2180] 00:31:01.842 bw ( KiB/s): min=13384, max=15904, per=38.66%, avg=14552.00, stdev=997.81, samples=5 00:31:01.842 iops : min= 3346, max= 3976, avg=3638.00, stdev=249.45, samples=5 00:31:01.842 lat (usec) : 250=38.47%, 500=61.49%, 750=0.02% 00:31:01.842 lat (msec) : 4=0.01%, 20=0.01% 00:31:01.842 cpu : usr=2.23%, sys=5.45%, ctx=10238, majf=0, minf=2 00:31:01.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.842 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.842 issued rwts: total=10235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:01.842 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=2842481: Fri Dec 6 03:39:21 2024 00:31:01.842 read: IOPS=25, BW=99.6KiB/s (102kB/s)(268KiB/2691msec) 00:31:01.842 slat (nsec): min=8220, max=32359, avg=14637.53, stdev=5492.86 00:31:01.842 clat (usec): min=460, max=42028, avg=39830.01, stdev=6961.99 00:31:01.842 lat (usec): min=485, max=42038, avg=39844.72, stdev=6959.50 00:31:01.842 clat percentiles (usec): 00:31:01.842 | 1.00th=[ 461], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:01.842 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:01.842 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:01.842 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:01.842 | 99.99th=[42206] 00:31:01.842 bw ( KiB/s): min= 96, max= 112, per=0.26%, avg=99.20, stdev= 7.16, samples=5 00:31:01.842 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:31:01.842 lat (usec) : 500=2.94% 00:31:01.842 lat (msec) : 50=95.59% 00:31:01.842 cpu : usr=0.07%, sys=0.00%, ctx=68, majf=0, minf=2 00:31:01.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.843 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.843 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:01.843 00:31:01.843 Run status group 0 (all jobs): 00:31:01.843 READ: bw=36.8MiB/s (38.5MB/s), 99.6KiB/s-14.3MiB/s (102kB/s-15.0MB/s), io=121MiB (127MB), run=2691-3290msec 00:31:01.843 00:31:01.843 Disk stats (read/write): 00:31:01.843 nvme0n1: ios=11298/0, merge=0/0, ticks=2746/0, in_queue=2746, util=93.47% 00:31:01.843 nvme0n2: ios=8700/0, merge=0/0, ticks=2795/0, in_queue=2795, util=93.51% 00:31:01.843 nvme0n3: ios=10090/0, merge=0/0, ticks=2553/0, in_queue=2553, util=95.37% 00:31:01.843 nvme0n4: ios=64/0, merge=0/0, ticks=2546/0, in_queue=2546, util=96.34% 
00:31:02.104 03:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:02.104 03:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:02.104 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:02.104 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:02.362 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:02.362 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:02.621 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:02.621 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:02.880 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:02.880 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2842338 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 
00:31:02.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:02.881 nvmf hotplug test: fio failed as expected 00:31:02.881 03:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.140 rmmod nvme_tcp 00:31:03.140 rmmod nvme_fabrics 00:31:03.140 rmmod nvme_keyring 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2839346 ']' 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2839346 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2839346 ']' 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2839346 00:31:03.140 03:39:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2839346 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2839346' 00:31:03.140 killing process with pid 2839346 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2839346 00:31:03.140 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2839346 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:03.400 
03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.400 03:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:05.937 00:31:05.937 real 0m25.467s 00:31:05.937 user 1m31.326s 00:31:05.937 sys 0m11.189s 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:05.937 ************************************ 00:31:05.937 END TEST nvmf_fio_target 00:31:05.937 ************************************ 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:05.937 ************************************ 00:31:05.937 START TEST nvmf_bdevio 00:31:05.937 
************************************ 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:05.937 * Looking for test storage... 00:31:05.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.937 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:05.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.938 --rc genhtml_branch_coverage=1 00:31:05.938 --rc genhtml_function_coverage=1 00:31:05.938 --rc genhtml_legend=1 00:31:05.938 --rc geninfo_all_blocks=1 00:31:05.938 --rc geninfo_unexecuted_blocks=1 00:31:05.938 00:31:05.938 ' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:05.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.938 --rc genhtml_branch_coverage=1 00:31:05.938 --rc genhtml_function_coverage=1 00:31:05.938 --rc genhtml_legend=1 00:31:05.938 --rc geninfo_all_blocks=1 00:31:05.938 --rc geninfo_unexecuted_blocks=1 00:31:05.938 00:31:05.938 ' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:05.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.938 --rc genhtml_branch_coverage=1 00:31:05.938 --rc genhtml_function_coverage=1 00:31:05.938 --rc genhtml_legend=1 00:31:05.938 --rc geninfo_all_blocks=1 00:31:05.938 --rc geninfo_unexecuted_blocks=1 00:31:05.938 00:31:05.938 ' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:05.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:05.938 --rc genhtml_branch_coverage=1 00:31:05.938 --rc genhtml_function_coverage=1 00:31:05.938 --rc genhtml_legend=1 00:31:05.938 --rc geninfo_all_blocks=1 00:31:05.938 --rc geninfo_unexecuted_blocks=1 00:31:05.938 00:31:05.938 ' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:05.938 03:39:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.938 03:39:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:05.938 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.939 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.939 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.939 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:05.939 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:05.939 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:05.939 03:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.210 03:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:11.210 03:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:11.210 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:11.211 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:11.211 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:11.211 Found net devices under 0000:86:00.0: cvl_0_0 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:11.211 Found net devices under 0000:86:00.1: cvl_0_1 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.211 
03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:11.211 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:11.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:31:11.471 00:31:11.471 --- 10.0.0.2 ping statistics --- 00:31:11.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.471 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:11.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:31:11.471 00:31:11.471 --- 10.0.0.1 ping statistics --- 00:31:11.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.471 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2846716 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2846716 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2846716 ']' 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:11.471 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:11.471 [2024-12-06 03:39:31.482286] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:11.471 [2024-12-06 03:39:31.483268] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:31:11.471 [2024-12-06 03:39:31.483305] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.471 [2024-12-06 03:39:31.548936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:11.471 [2024-12-06 03:39:31.589222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.471 [2024-12-06 03:39:31.589262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.471 [2024-12-06 03:39:31.589269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.471 [2024-12-06 03:39:31.589276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.471 [2024-12-06 03:39:31.589282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:11.471 [2024-12-06 03:39:31.590937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:11.471 [2024-12-06 03:39:31.591064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:11.471 [2024-12-06 03:39:31.591175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:11.471 [2024-12-06 03:39:31.591175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:11.731 [2024-12-06 03:39:31.659858] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:11.731 [2024-12-06 03:39:31.660441] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:11.731 [2024-12-06 03:39:31.660833] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:11.731 [2024-12-06 03:39:31.661078] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:11.731 [2024-12-06 03:39:31.661120] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:11.731 [2024-12-06 03:39:31.739853] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:11.731 Malloc0 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:11.731 [2024-12-06 03:39:31.807861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
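The traced bdevio.sh lines 18–22 stand up the target with four JSON-RPCs before the listener notice appears. Outside the test harness's `rpc_cmd` wrapper, the equivalent sequence would be the following (a sketch only: it assumes a running `nvmf_tgt` and SPDK's `rpc.py` on `PATH`, and is not runnable standalone):

```shell
# Create the TCP transport (flags copied verbatim from the trace above)
rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MB ramdisk with 512-byte blocks -> the 131072-block Nvme1n1 seen later in the log
rpc.py bdev_malloc_create 64 512 -b Malloc0
# Subsystem with any-host access (-a) and a fixed serial number (-s)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `tcp.c` "Target Listening on 10.0.0.2 port 4420" notice in the log is the target-side confirmation of the final RPC.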
00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:11.731 { 00:31:11.731 "params": { 00:31:11.731 "name": "Nvme$subsystem", 00:31:11.731 "trtype": "$TEST_TRANSPORT", 00:31:11.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.731 "adrfam": "ipv4", 00:31:11.731 "trsvcid": "$NVMF_PORT", 00:31:11.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.731 "hdgst": ${hdgst:-false}, 00:31:11.731 "ddgst": ${ddgst:-false} 00:31:11.731 }, 00:31:11.731 "method": "bdev_nvme_attach_controller" 00:31:11.731 } 00:31:11.731 EOF 00:31:11.731 )") 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:11.731 03:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:11.731 "params": { 00:31:11.731 "name": "Nvme1", 00:31:11.731 "trtype": "tcp", 00:31:11.731 "traddr": "10.0.0.2", 00:31:11.731 "adrfam": "ipv4", 00:31:11.731 "trsvcid": "4420", 00:31:11.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:11.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:11.731 "hdgst": false, 00:31:11.731 "ddgst": false 00:31:11.731 }, 00:31:11.731 "method": "bdev_nvme_attach_controller" 00:31:11.731 }' 00:31:11.731 [2024-12-06 03:39:31.858584] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:31:11.731 [2024-12-06 03:39:31.858625] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846857 ] 00:31:11.991 [2024-12-06 03:39:31.920414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:11.991 [2024-12-06 03:39:31.964774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.991 [2024-12-06 03:39:31.964869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.991 [2024-12-06 03:39:31.964872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.252 I/O targets: 00:31:12.252 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:12.252 00:31:12.252 00:31:12.252 CUnit - A unit testing framework for C - Version 2.1-3 00:31:12.252 http://cunit.sourceforge.net/ 00:31:12.252 00:31:12.252 00:31:12.252 Suite: bdevio tests on: Nvme1n1 00:31:12.252 Test: blockdev write read block ...passed 00:31:12.252 Test: blockdev write zeroes read block ...passed 00:31:12.252 Test: blockdev write zeroes read no split ...passed 00:31:12.252 Test: blockdev 
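The trace above shows `gen_nvmf_target_json` expanding one heredoc entry per subsystem and joining the entries with `IFS=,` before handing the result to bdevio on `/dev/fd/62`. A minimal standalone sketch of that expansion (the function name `gen_target_json_sketch` is hypothetical, and the hard-coded defaults are taken from the expanded output in the log):

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem heredoc expansion in gen_nvmf_target_json.
gen_target_json_sketch() {
  # Defaults taken from the expanded config printed in the log.
  local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
  local config=() subsystem
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Join the entries with commas, as the original does via IFS=,
  local IFS=,
  printf '%s\n' "${config[*]}"
}
```

With a single subsystem the joined output is exactly the one-entry config printed by `printf '%s\n'` in the trace.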
write zeroes read split ...passed 00:31:12.512 Test: blockdev write zeroes read split partial ...passed 00:31:12.512 Test: blockdev reset ...[2024-12-06 03:39:32.421695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:12.512 [2024-12-06 03:39:32.421759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dbf30 (9): Bad file descriptor 00:31:12.512 [2024-12-06 03:39:32.514979] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:12.512 passed 00:31:12.512 Test: blockdev write read 8 blocks ...passed 00:31:12.512 Test: blockdev write read size > 128k ...passed 00:31:12.512 Test: blockdev write read invalid size ...passed 00:31:12.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:12.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:12.512 Test: blockdev write read max offset ...passed 00:31:12.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:12.772 Test: blockdev writev readv 8 blocks ...passed 00:31:12.772 Test: blockdev writev readv 30 x 1block ...passed 00:31:12.772 Test: blockdev writev readv block ...passed 00:31:12.772 Test: blockdev writev readv size > 128k ...passed 00:31:12.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:12.772 Test: blockdev comparev and writev ...[2024-12-06 03:39:32.765815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:12.772 [2024-12-06 03:39:32.765843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.765857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:12.772 
[2024-12-06 03:39:32.765865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.766177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:12.772 [2024-12-06 03:39:32.766189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.766200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:12.772 [2024-12-06 03:39:32.766208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.766500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:12.772 [2024-12-06 03:39:32.766510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.766522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:12.772 [2024-12-06 03:39:32.766529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.766823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:12.772 [2024-12-06 03:39:32.766833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.766845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:12.772 [2024-12-06 03:39:32.766851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:12.772 passed 00:31:12.772 Test: blockdev nvme passthru rw ...passed 00:31:12.772 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:39:32.849318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:12.772 [2024-12-06 03:39:32.849336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.849450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:12.772 [2024-12-06 03:39:32.849460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.849574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:12.772 [2024-12-06 03:39:32.849584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:12.772 [2024-12-06 03:39:32.849696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:12.772 [2024-12-06 03:39:32.849706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:12.772 passed 00:31:12.772 Test: blockdev nvme admin passthru ...passed 00:31:12.772 Test: blockdev copy ...passed 00:31:12.772 00:31:12.772 Run Summary: Type Total Ran Passed Failed Inactive 00:31:12.772 suites 1 1 n/a 0 0 00:31:12.772 tests 23 23 23 0 0 00:31:12.772 asserts 152 152 152 0 n/a 00:31:12.772 00:31:12.772 Elapsed time = 1.257 
seconds 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.032 rmmod nvme_tcp 00:31:13.032 rmmod nvme_fabrics 00:31:13.032 rmmod nvme_keyring 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2846716 ']' 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2846716 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2846716 ']' 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2846716 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.032 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2846716 00:31:13.291 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:13.291 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:13.291 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2846716' 00:31:13.291 killing process with pid 2846716 00:31:13.291 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2846716 00:31:13.291 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2846716 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
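The `killprocess` trace above first probes the PID with `kill -0`, reads the command name with `ps --no-headers -o comm=`, and refuses to kill anything named `sudo` (here it sees `reactor_3`, so it proceeds to `kill` and then `wait`). A minimal sketch of that guard under the same checks (the function name `killguard` is hypothetical):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess-style guard traced in the log.
killguard() {
  local pid=$1
  # Probe: does the process still exist?
  kill -0 "$pid" 2>/dev/null || return 1
  # Read the command name; never kill a bare sudo wrapper.
  local name
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = "sudo" ] && return 1
  echo "killing process with pid $pid"
  kill "$pid"
  # Reap the child; wait returns 143 for SIGTERM, which is expected here.
  wait "$pid" 2>/dev/null || true
}
```

The sudo check matters because the harness launches targets via `sudo`; killing the wrapper instead of the reactor process would leave the target running.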
nvmf/common.sh@297 -- # iptr 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.292 03:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.830 03:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.830 00:31:15.830 real 0m9.893s 00:31:15.830 user 0m9.748s 00:31:15.830 sys 0m5.146s 00:31:15.830 03:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.830 03:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.830 ************************************ 00:31:15.830 END TEST nvmf_bdevio 00:31:15.830 ************************************ 00:31:15.830 03:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:15.830 00:31:15.830 real 4m26.904s 00:31:15.830 user 9m7.380s 00:31:15.830 sys 1m47.474s 00:31:15.830 03:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:31:15.830 03:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.830 ************************************ 00:31:15.830 END TEST nvmf_target_core_interrupt_mode 00:31:15.830 ************************************ 00:31:15.830 03:39:35 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:15.830 03:39:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:15.830 03:39:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.830 03:39:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:15.830 ************************************ 00:31:15.830 START TEST nvmf_interrupt 00:31:15.830 ************************************ 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:15.830 * Looking for test storage... 
00:31:15.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.830 --rc genhtml_branch_coverage=1 00:31:15.830 --rc genhtml_function_coverage=1 00:31:15.830 --rc genhtml_legend=1 00:31:15.830 --rc geninfo_all_blocks=1 00:31:15.830 --rc geninfo_unexecuted_blocks=1 00:31:15.830 00:31:15.830 ' 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.830 --rc genhtml_branch_coverage=1 00:31:15.830 --rc 
genhtml_function_coverage=1 00:31:15.830 --rc genhtml_legend=1 00:31:15.830 --rc geninfo_all_blocks=1 00:31:15.830 --rc geninfo_unexecuted_blocks=1 00:31:15.830 00:31:15.830 ' 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.830 --rc genhtml_branch_coverage=1 00:31:15.830 --rc genhtml_function_coverage=1 00:31:15.830 --rc genhtml_legend=1 00:31:15.830 --rc geninfo_all_blocks=1 00:31:15.830 --rc geninfo_unexecuted_blocks=1 00:31:15.830 00:31:15.830 ' 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.830 --rc genhtml_branch_coverage=1 00:31:15.830 --rc genhtml_function_coverage=1 00:31:15.830 --rc genhtml_legend=1 00:31:15.830 --rc geninfo_all_blocks=1 00:31:15.830 --rc geninfo_unexecuted_blocks=1 00:31:15.830 00:31:15.830 ' 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.830 
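The xtrace above walks `scripts/common.sh`'s version comparison for `lt 1.15 2`: each version is split on `.`, `-`, or `:` via `IFS=.-:`, the components are compared numerically left to right up to the longer length, and the first differing pair decides the result (here `1 < 2`, so `lt` returns 0). A minimal sketch of just the less-than case (the helper name `lt_sketch` is hypothetical; the original handles `<`, `>`, and `==`):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions walk traced above: split on '.', '-' or ':'
# and compare components numerically, left to right.
lt_sketch() {
  local -a ver1 ver2
  local IFS=.-: v
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  # Iterate up to the longer of the two component lists, padding with 0.
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions: not strictly less
}
```

The padding with `:-0` is what makes `2.1 < 2.1-3` true: the missing third component of `2.1` compares as 0 against 3.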
03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.830 
03:39:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.830 03:39:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.831 03:39:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.831 
03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.831 03:39:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.123 03:39:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:21.123 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:21.123 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.123 03:39:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:21.123 Found net devices under 0000:86:00.0: cvl_0_0 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:21.123 Found net devices under 0000:86:00.1: cvl_0_1 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.123 03:39:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.123 03:39:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:31:21.123 00:31:21.123 --- 10.0.0.2 ping statistics --- 00:31:21.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.123 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:21.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:31:21.123 00:31:21.123 --- 10.0.0.1 ping statistics --- 00:31:21.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.123 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:21.123 03:39:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2850496 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2850496 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2850496 ']' 00:31:21.123 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.124 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.124 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.124 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.124 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:21.124 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.124 [2024-12-06 03:39:41.135324] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:21.124 [2024-12-06 03:39:41.136268] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:31:21.124 [2024-12-06 03:39:41.136302] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.124 [2024-12-06 03:39:41.201571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:21.124 [2024-12-06 03:39:41.243920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.124 [2024-12-06 03:39:41.243961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.124 [2024-12-06 03:39:41.243969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.124 [2024-12-06 03:39:41.243975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.124 [2024-12-06 03:39:41.243980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.124 [2024-12-06 03:39:41.245129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.124 [2024-12-06 03:39:41.245132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.383 [2024-12-06 03:39:41.313218] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:21.383 [2024-12-06 03:39:41.313486] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:21.383 [2024-12-06 03:39:41.313528] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:21.383 5000+0 records in 00:31:21.383 5000+0 records out 00:31:21.383 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0168487 s, 608 MB/s 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.383 AIO0 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.383 03:39:41 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.383 [2024-12-06 03:39:41.425629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.383 [2024-12-06 03:39:41.449957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2850496 0 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2850496 0 idle 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2850496 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2850496 -w 256 00:31:21.383 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2850496 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0' 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2850496 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:21.641 
03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2850496 1 00:31:21.641 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2850496 1 idle 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2850496 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:21.642 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2850496 -w 256 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2850503 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2850503 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2850625 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2850496 0 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2850496 0 busy 00:31:21.899 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2850496 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2850496 -w 256 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2850496 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.23 reactor_0' 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2850496 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.23 reactor_0 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:21.900 03:39:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 2850496 -w 256 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2850496 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.58 reactor_0' 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2850496 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.58 reactor_0 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2850496 1 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2850496 1 busy 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2850496 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2850496 -w 256 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2850503 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.35 reactor_1' 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2850503 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.35 reactor_1 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:23.277 03:39:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2850625 00:31:33.262 Initializing NVMe Controllers 00:31:33.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:33.262 
Controller IO queue size 256, less than required. 00:31:33.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:33.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:33.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:33.262 Initialization complete. Launching workers. 00:31:33.262 ======================================================== 00:31:33.262 Latency(us) 00:31:33.262 Device Information : IOPS MiB/s Average min max 00:31:33.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16320.80 63.75 15694.23 2708.70 55940.94 00:31:33.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16149.90 63.09 15859.86 4206.55 19536.79 00:31:33.262 ======================================================== 00:31:33.263 Total : 32470.70 126.84 15776.61 2708.70 55940.94 00:31:33.263 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2850496 0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2850496 0 idle 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2850496 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:33.263 03:39:52 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2850496 -w 256 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2850496 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.22 reactor_0' 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2850496 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.22 reactor_0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2850496 1 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2850496 1 idle 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2850496 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2850496 -w 256 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2850503 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2850503 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:33.263 03:39:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2850496 0 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2850496 0 idle 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2850496 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:35.163 03:39:54 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2850496 -w 256 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2850496 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.35 reactor_0' 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2850496 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.35 reactor_0 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:35.163 03:39:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2850496 1 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2850496 1 idle 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2850496 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2850496 -w 256 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2850503 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.04 reactor_1' 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2850503 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.04 reactor_1 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:35.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:31:35.163 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:35.164 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.164 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:35.164 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.422 rmmod nvme_tcp 00:31:35.422 rmmod nvme_fabrics 00:31:35.422 rmmod nvme_keyring 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2850496 ']' 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2850496 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2850496 ']' 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2850496 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2850496 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2850496' 00:31:35.422 killing process with pid 2850496 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2850496 00:31:35.422 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2850496 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.681 03:39:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.584 03:39:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.584 00:31:37.584 real 0m22.153s 00:31:37.584 user 0m39.289s 00:31:37.584 sys 0m8.088s 00:31:37.584 03:39:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.584 03:39:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:37.584 ************************************ 00:31:37.584 END TEST nvmf_interrupt 00:31:37.584 ************************************ 00:31:37.843 00:31:37.843 real 26m44.399s 00:31:37.843 user 55m53.044s 00:31:37.843 sys 8m55.267s 00:31:37.843 03:39:57 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.843 03:39:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:37.843 ************************************ 00:31:37.843 END TEST nvmf_tcp 00:31:37.843 ************************************ 00:31:37.843 03:39:57 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:31:37.843 03:39:57 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:37.844 03:39:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:37.844 03:39:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:37.844 03:39:57 -- common/autotest_common.sh@10 -- # set +x 00:31:37.844 ************************************ 00:31:37.844 START TEST spdkcli_nvmf_tcp 00:31:37.844 ************************************ 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:37.844 * Looking for test storage... 00:31:37.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:37.844 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.103 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:38.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.103 --rc genhtml_branch_coverage=1 00:31:38.103 --rc genhtml_function_coverage=1 00:31:38.103 --rc genhtml_legend=1 00:31:38.103 --rc geninfo_all_blocks=1 
00:31:38.103 --rc geninfo_unexecuted_blocks=1 00:31:38.103 00:31:38.103 ' 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.104 --rc genhtml_branch_coverage=1 00:31:38.104 --rc genhtml_function_coverage=1 00:31:38.104 --rc genhtml_legend=1 00:31:38.104 --rc geninfo_all_blocks=1 00:31:38.104 --rc geninfo_unexecuted_blocks=1 00:31:38.104 00:31:38.104 ' 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.104 --rc genhtml_branch_coverage=1 00:31:38.104 --rc genhtml_function_coverage=1 00:31:38.104 --rc genhtml_legend=1 00:31:38.104 --rc geninfo_all_blocks=1 00:31:38.104 --rc geninfo_unexecuted_blocks=1 00:31:38.104 00:31:38.104 ' 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.104 --rc genhtml_branch_coverage=1 00:31:38.104 --rc genhtml_function_coverage=1 00:31:38.104 --rc genhtml_legend=1 00:31:38.104 --rc geninfo_all_blocks=1 00:31:38.104 --rc geninfo_unexecuted_blocks=1 00:31:38.104 00:31:38.104 ' 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:38.104 03:39:57 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.104 03:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:38.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2853442 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2853442 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2853442 ']' 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-m 0x3 -p 0 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.104 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.104 [2024-12-06 03:39:58.077216] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:31:38.104 [2024-12-06 03:39:58.077264] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853442 ] 00:31:38.104 [2024-12-06 03:39:58.137742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:38.105 [2024-12-06 03:39:58.181868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.105 [2024-12-06 03:39:58.181872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.364 03:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:38.364 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:38.364 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:38.364 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:38.364 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:38.364 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:38.364 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:38.364 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:38.364 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:38.364 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:38.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:38.364 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:38.364 ' 00:31:40.903 [2024-12-06 03:40:00.797675] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.279 [2024-12-06 03:40:02.017770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:31:44.179 [2024-12-06 03:40:04.264716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:46.080 [2024-12-06 03:40:06.194900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:47.980 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:47.980 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:47.980 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:47.980 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:47.980 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:47.980 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:47.980 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:47.980 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:47.980 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:47.980 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:47.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:47.980 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:47.980 03:40:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:31:47.980 03:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:47.980 03:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:47.980 03:40:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:47.980 03:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:47.980 03:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:47.980 03:40:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:47.980 03:40:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:48.239 03:40:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:48.239 03:40:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:48.239 03:40:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:48.239 03:40:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.239 03:40:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:48.239 03:40:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:48.239 03:40:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.239 03:40:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:48.239 03:40:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:48.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:31:48.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:48.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:48.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:48.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:48.239 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:48.239 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:48.239 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:48.239 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:48.239 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:48.239 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:48.239 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:48.239 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:48.239 ' 00:31:53.511 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:53.511 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:53.511 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:53.511 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:53.511 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:53.511 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:53.511 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:53.511 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:53.511 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:53.511 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:53.511 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:53.511 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:53.511 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:53.511 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2853442 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2853442 ']' 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2853442 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2853442 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2853442' 00:31:53.511 killing process with pid 2853442 00:31:53.511 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2853442 00:31:53.511 03:40:13 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2853442 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2853442 ']' 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2853442 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2853442 ']' 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2853442 00:31:53.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2853442) - No such process 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2853442 is not found' 00:31:53.771 Process with pid 2853442 is not found 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:53.771 00:31:53.771 real 0m15.859s 00:31:53.771 user 0m33.050s 00:31:53.771 sys 0m0.678s 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.771 03:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:53.771 ************************************ 00:31:53.771 END TEST spdkcli_nvmf_tcp 00:31:53.771 ************************************ 00:31:53.771 03:40:13 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:53.771 03:40:13 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:31:53.771 03:40:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.771 03:40:13 -- common/autotest_common.sh@10 -- # set +x 00:31:53.771 ************************************ 00:31:53.771 START TEST nvmf_identify_passthru 00:31:53.771 ************************************ 00:31:53.771 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:53.771 * Looking for test storage... 00:31:53.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:53.771 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:53.771 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:31:53.771 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:53.771 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.771 03:40:13 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:53.772 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.772 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:53.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.772 --rc genhtml_branch_coverage=1 00:31:53.772 --rc genhtml_function_coverage=1 00:31:53.772 --rc genhtml_legend=1 
00:31:53.772 --rc geninfo_all_blocks=1 00:31:53.772 --rc geninfo_unexecuted_blocks=1 00:31:53.772 00:31:53.772 ' 00:31:53.772 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:53.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.772 --rc genhtml_branch_coverage=1 00:31:53.772 --rc genhtml_function_coverage=1 00:31:53.772 --rc genhtml_legend=1 00:31:53.772 --rc geninfo_all_blocks=1 00:31:53.772 --rc geninfo_unexecuted_blocks=1 00:31:53.772 00:31:53.772 ' 00:31:53.772 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:53.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.772 --rc genhtml_branch_coverage=1 00:31:53.772 --rc genhtml_function_coverage=1 00:31:53.772 --rc genhtml_legend=1 00:31:53.772 --rc geninfo_all_blocks=1 00:31:53.772 --rc geninfo_unexecuted_blocks=1 00:31:53.772 00:31:53.772 ' 00:31:53.772 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:53.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.772 --rc genhtml_branch_coverage=1 00:31:53.772 --rc genhtml_function_coverage=1 00:31:53.772 --rc genhtml_legend=1 00:31:53.772 --rc geninfo_all_blocks=1 00:31:53.772 --rc geninfo_unexecuted_blocks=1 00:31:53.772 00:31:53.772 ' 00:31:53.772 03:40:13 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.772 03:40:13 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.772 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.772 03:40:13 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.033 03:40:13 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.033 03:40:13 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.033 03:40:13 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:54.033 03:40:13 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:54.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.033 03:40:13 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.033 03:40:13 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.033 03:40:13 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.033 03:40:13 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.033 03:40:13 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:54.033 03:40:13 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.033 03:40:13 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.033 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:54.033 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.033 03:40:13 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.033 03:40:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.301 
03:40:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:59.301 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:59.301 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:59.301 Found net devices under 0000:86:00.0: cvl_0_0 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.301 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.302 03:40:19 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:59.302 Found net devices under 0000:86:00.1: cvl_0_1 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.302 
03:40:19 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:59.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:31:59.302 00:31:59.302 --- 10.0.0.2 ping statistics --- 00:31:59.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.302 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:31:59.302 00:31:59.302 --- 10.0.0.1 ping statistics --- 00:31:59.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.302 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.302 03:40:19 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.302 03:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:59.302 03:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:59.302 
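The nvmf_tcp_init steps traced above (nvmf/common.sh@250-291) can be sketched as a short script: move the target port of the NIC pair into a network namespace, address both sides, open TCP/4420, and ping across. `setup_nvmf_tcp` and `run` are illustrative names, not SPDK helpers; `cvl_0_0`/`cvl_0_1` and the 10.0.0.x addresses are the ones from this run. The `run` wrapper lets the sketch print commands instead of executing them, since the real calls need root and the hardware.

```shell
# Dry-run wrapper: echo the command instead of running it when DRY_RUN=1.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

# Sketch of nvmf_tcp_init: target_if goes into a namespace, initiator_if
# stays in the root namespace, and the two ping each other over 10.0.0.0/24.
setup_nvmf_tcp() {
    local target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2   # initiator -> target, as in the log above
}

DRY_RUN=1
setup_nvmf_tcp cvl_0_0 cvl_0_1
```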
03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:59.302 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:59.561 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:59.561 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:59.561 03:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:31:59.561 03:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:31:59.561 03:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:31:59.561 03:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:59.561 03:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:31:59.561 03:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:03.753 03:40:23 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:03.753 03:40:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:03.753 03:40:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:03.753 03:40:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:07.949 03:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:07.949 03:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.949 03:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.949 03:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2860260 00:32:07.949 03:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:07.949 03:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:07.949 03:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2860260 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2860260 ']' 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.949 [2024-12-06 03:40:27.806092] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:32:07.949 [2024-12-06 03:40:27.806145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.949 [2024-12-06 03:40:27.872606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:07.949 [2024-12-06 03:40:27.916211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.949 [2024-12-06 03:40:27.916251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.949 [2024-12-06 03:40:27.916258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.949 [2024-12-06 03:40:27.916265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.949 [2024-12-06 03:40:27.916270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
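The serial/model capture at identify_passthru.sh@23-24 above is a plain grep + awk over `spdk_nvme_identify` output. A self-contained version of that extraction, with sample text standing in for the real tool (the serial and model values are the ones from this run):

```shell
# Extract the 3rd whitespace-separated field of the "Serial Number:" and
# "Model Number:" lines, exactly as the test script does with
# spdk_nvme_identify output.
sample='Serial Number:           BTLJ72430F0E1P0FGN
Model Number:            INTEL SSDPE2KX010T8'
serial=$(printf '%s\n' "$sample" | grep 'Serial Number:' | awk '{print $3}')
model=$(printf '%s\n' "$sample" | grep 'Model Number:' | awk '{print $3}')
echo "$serial $model"
```

Note that `awk '{print $3}'` keeps only the first word of the model string, which is why the log records `nvme_model_number=INTEL` rather than the full model name.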
00:32:07.949 [2024-12-06 03:40:27.917914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.949 [2024-12-06 03:40:27.918017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:07.949 [2024-12-06 03:40:27.918040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.949 [2024-12-06 03:40:27.918042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:07.949 03:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.949 INFO: Log level set to 20 00:32:07.949 INFO: Requests: 00:32:07.949 { 00:32:07.949 "jsonrpc": "2.0", 00:32:07.949 "method": "nvmf_set_config", 00:32:07.949 "id": 1, 00:32:07.949 "params": { 00:32:07.949 "admin_cmd_passthru": { 00:32:07.949 "identify_ctrlr": true 00:32:07.949 } 00:32:07.949 } 00:32:07.949 } 00:32:07.949 00:32:07.949 INFO: response: 00:32:07.949 { 00:32:07.949 "jsonrpc": "2.0", 00:32:07.949 "id": 1, 00:32:07.949 "result": true 00:32:07.949 } 00:32:07.949 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.949 03:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.949 03:40:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.949 INFO: Setting log level to 20 00:32:07.949 INFO: Setting log level to 20 00:32:07.949 INFO: Log level set to 20 00:32:07.949 INFO: Log level set to 20 00:32:07.949 
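The RPC sequence this test drives (identify_passthru.sh@36-44) maps to the following `rpc.py` calls, all taken from the trace. The `rpc` stub here only prints each command so the sketch runs without a live `nvmf_tgt`; drop the stub to issue them against a running target for real.

```shell
# Stub: print the rpc.py invocation instead of contacting /var/tmp/spdk.sock.
rpc() { echo "rpc.py $*"; }

rpc nvmf_set_config --passthru-identify-ctrlr
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Ordering matters: `nvmf_set_config` must precede `framework_start_init` (config is locked once the framework starts), and the subsystem must exist before a namespace or listener can be attached to it.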
INFO: Requests: 00:32:07.949 { 00:32:07.949 "jsonrpc": "2.0", 00:32:07.949 "method": "framework_start_init", 00:32:07.949 "id": 1 00:32:07.949 } 00:32:07.949 00:32:07.949 INFO: Requests: 00:32:07.949 { 00:32:07.949 "jsonrpc": "2.0", 00:32:07.949 "method": "framework_start_init", 00:32:07.949 "id": 1 00:32:07.949 } 00:32:07.949 00:32:07.949 [2024-12-06 03:40:28.036180] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:07.949 INFO: response: 00:32:07.949 { 00:32:07.949 "jsonrpc": "2.0", 00:32:07.949 "id": 1, 00:32:07.949 "result": true 00:32:07.949 } 00:32:07.949 00:32:07.949 INFO: response: 00:32:07.949 { 00:32:07.950 "jsonrpc": "2.0", 00:32:07.950 "id": 1, 00:32:07.950 "result": true 00:32:07.950 } 00:32:07.950 00:32:07.950 03:40:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.950 03:40:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:07.950 03:40:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.950 03:40:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.950 INFO: Setting log level to 40 00:32:07.950 INFO: Setting log level to 40 00:32:07.950 INFO: Setting log level to 40 00:32:07.950 [2024-12-06 03:40:28.049504] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.950 03:40:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.950 03:40:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:07.950 03:40:28 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.950 03:40:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:08.209 03:40:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:08.209 03:40:28 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.209 03:40:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.500 Nvme0n1 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.500 03:40:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.500 03:40:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.500 03:40:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.500 [2024-12-06 03:40:30.961886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.500 03:40:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.500 03:40:30 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.500 [ 00:32:11.500 { 00:32:11.500 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:11.500 "subtype": "Discovery", 00:32:11.500 "listen_addresses": [], 00:32:11.500 "allow_any_host": true, 00:32:11.500 "hosts": [] 00:32:11.500 }, 00:32:11.500 { 00:32:11.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.500 "subtype": "NVMe", 00:32:11.500 "listen_addresses": [ 00:32:11.500 { 00:32:11.500 "trtype": "TCP", 00:32:11.500 "adrfam": "IPv4", 00:32:11.500 "traddr": "10.0.0.2", 00:32:11.500 "trsvcid": "4420" 00:32:11.500 } 00:32:11.500 ], 00:32:11.500 "allow_any_host": true, 00:32:11.500 "hosts": [], 00:32:11.500 "serial_number": "SPDK00000000000001", 00:32:11.500 "model_number": "SPDK bdev Controller", 00:32:11.500 "max_namespaces": 1, 00:32:11.500 "min_cntlid": 1, 00:32:11.500 "max_cntlid": 65519, 00:32:11.500 "namespaces": [ 00:32:11.500 { 00:32:11.500 "nsid": 1, 00:32:11.500 "bdev_name": "Nvme0n1", 00:32:11.500 "name": "Nvme0n1", 00:32:11.500 "nguid": "56CF48D8DD6E495B85AB7E91169DDD0A", 00:32:11.500 "uuid": "56cf48d8-dd6e-495b-85ab-7e91169ddd0a" 00:32:11.500 } 00:32:11.500 ] 00:32:11.500 } 00:32:11.500 ] 00:32:11.500 03:40:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.500 03:40:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:11.500 03:40:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:11.500 03:40:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:11.500 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:11.500 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:11.500 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:11.500 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:11.500 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:11.500 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:11.500 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:11.501 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.501 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:11.501 03:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:11.501 rmmod nvme_tcp 00:32:11.501 rmmod nvme_fabrics 00:32:11.501 rmmod nvme_keyring 00:32:11.501 03:40:31 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2860260 ']' 00:32:11.501 03:40:31 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2860260 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2860260 ']' 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2860260 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2860260 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2860260' 00:32:11.501 killing process with pid 2860260 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2860260 00:32:11.501 03:40:31 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2860260 00:32:12.880 03:40:32 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:12.880 03:40:32 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:12.880 03:40:32 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:12.880 03:40:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:12.880 03:40:32 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:12.880 03:40:32 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # iptables-save 00:32:12.880 03:40:32 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:12.880 03:40:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:12.880 03:40:32 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:12.880 03:40:32 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.880 03:40:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:12.880 03:40:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.846 03:40:34 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.187 00:32:15.187 real 0m21.237s 00:32:15.187 user 0m26.399s 00:32:15.187 sys 0m5.811s 00:32:15.187 03:40:34 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.187 03:40:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:15.187 ************************************ 00:32:15.187 END TEST nvmf_identify_passthru 00:32:15.187 ************************************ 00:32:15.187 03:40:35 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:15.187 03:40:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:15.187 03:40:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.187 03:40:35 -- common/autotest_common.sh@10 -- # set +x 00:32:15.187 ************************************ 00:32:15.187 START TEST nvmf_dif 00:32:15.187 ************************************ 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:15.187 * Looking for test storage... 
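The `iptr` teardown traced above restores the firewall by dumping the current rules, dropping every rule tagged with the `SPDK_NVMF` comment, and feeding the remainder back to `iptables-restore`. The filtering step can be shown with a saved-rules string in place of a live firewall (the rule text below is illustrative):

```shell
# Simulated iptables-save output: one SPDK-tagged rule, one unrelated rule.
saved='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule1
-A INPUT -i eth0 -j ACCEPT'

# The actual cleanup is: iptables-save | grep -v SPDK_NVMF | iptables-restore
kept=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
echo "$kept"
```

Tagging each inserted rule with a fixed comment is what makes this cleanup safe: only rules SPDK itself added are removed, and pre-existing firewall state survives.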
00:32:15.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.187 03:40:35 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:15.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.187 --rc genhtml_branch_coverage=1 00:32:15.187 --rc genhtml_function_coverage=1 00:32:15.187 --rc genhtml_legend=1 00:32:15.187 --rc geninfo_all_blocks=1 00:32:15.187 --rc geninfo_unexecuted_blocks=1 00:32:15.187 00:32:15.187 ' 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:15.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.187 --rc genhtml_branch_coverage=1 00:32:15.187 --rc genhtml_function_coverage=1 00:32:15.187 --rc genhtml_legend=1 00:32:15.187 --rc geninfo_all_blocks=1 00:32:15.187 --rc geninfo_unexecuted_blocks=1 00:32:15.187 00:32:15.187 ' 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:32:15.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.187 --rc genhtml_branch_coverage=1 00:32:15.187 --rc genhtml_function_coverage=1 00:32:15.187 --rc genhtml_legend=1 00:32:15.187 --rc geninfo_all_blocks=1 00:32:15.187 --rc geninfo_unexecuted_blocks=1 00:32:15.187 00:32:15.187 ' 00:32:15.187 03:40:35 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:15.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.187 --rc genhtml_branch_coverage=1 00:32:15.187 --rc genhtml_function_coverage=1 00:32:15.187 --rc genhtml_legend=1 00:32:15.187 --rc geninfo_all_blocks=1 00:32:15.187 --rc geninfo_unexecuted_blocks=1 00:32:15.187 00:32:15.187 ' 00:32:15.187 03:40:35 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.187 03:40:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:15.188 03:40:35 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.188 03:40:35 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.188 03:40:35 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.188 03:40:35 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.188 03:40:35 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.188 03:40:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.188 03:40:35 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.188 03:40:35 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.188 03:40:35 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:15.188 03:40:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:15.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:15.188 03:40:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:15.188 03:40:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
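The lcov check above (`lt 1.15 2` via scripts/common.sh's `cmp_versions`) is an ordinary dotted-version comparison. A minimal stand-in using `sort -V`, where `version_lt` is an illustrative name rather than the SPDK helper:

```shell
# True when $1 sorts strictly before $2 under version ordering.
version_lt() {
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2: apply legacy --rc options"
```

This is why the trace goes on to export `LCOV_OPTS` with the `--rc lcov_branch_coverage=1` style flags: that spelling is the pre-2.0 lcov syntax.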
00:32:15.188 03:40:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:15.188 03:40:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:15.188 03:40:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.188 03:40:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:15.188 03:40:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:15.188 03:40:35 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:32:15.188 03:40:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:20.461 03:40:40 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:20.461 03:40:40 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:20.462 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:20.462 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.462 03:40:40 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:20.462 Found net devices under 0000:86:00.0: cvl_0_0 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:20.462 Found net devices under 0000:86:00.1: cvl_0_1 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.462 
03:40:40 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:20.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:20.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:32:20.462 00:32:20.462 --- 10.0.0.2 ping statistics --- 00:32:20.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.462 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:20.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:32:20.462 00:32:20.462 --- 10.0.0.1 ping statistics --- 00:32:20.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.462 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:20.462 03:40:40 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:22.361 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:22.361 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:32:22.361 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:22.361 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:22.620 03:40:42 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:22.620 03:40:42 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:22.620 03:40:42 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:22.620 03:40:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2865724 00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2865724 00:32:22.620 03:40:42 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2865724 ']' 00:32:22.620 03:40:42 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.620 03:40:42 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:22.620 03:40:42 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:22.620 03:40:42 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:22.620 03:40:42 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:22.620 03:40:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.620 [2024-12-06 03:40:42.699251] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:32:22.620 [2024-12-06 03:40:42.699295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.880 [2024-12-06 03:40:42.765898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.880 [2024-12-06 03:40:42.807057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.880 [2024-12-06 03:40:42.807096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.880 [2024-12-06 03:40:42.807103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.880 [2024-12-06 03:40:42.807109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.880 [2024-12-06 03:40:42.807114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:22.880 [2024-12-06 03:40:42.807703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:32:22.880 03:40:42 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.880 03:40:42 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.880 03:40:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:22.880 03:40:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.880 [2024-12-06 03:40:42.939493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.880 03:40:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.880 03:40:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.880 ************************************ 00:32:22.880 START TEST fio_dif_1_default 00:32:22.880 ************************************ 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:22.880 bdev_null0 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.880 03:40:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:22.881 [2024-12-06 03:40:43.003772] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default 
-- nvmf/common.sh@560 -- # local subsystem config 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:22.881 { 00:32:22.881 "params": { 00:32:22.881 "name": "Nvme$subsystem", 00:32:22.881 "trtype": "$TEST_TRANSPORT", 00:32:22.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:22.881 "adrfam": "ipv4", 00:32:22.881 "trsvcid": "$NVMF_PORT", 00:32:22.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:22.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:22.881 "hdgst": ${hdgst:-false}, 00:32:22.881 "ddgst": ${ddgst:-false} 00:32:22.881 }, 00:32:22.881 "method": "bdev_nvme_attach_controller" 00:32:22.881 } 00:32:22.881 EOF 00:32:22.881 )") 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:22.881 03:40:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:23.141 "params": { 00:32:23.141 "name": "Nvme0", 00:32:23.141 "trtype": "tcp", 00:32:23.141 "traddr": "10.0.0.2", 00:32:23.141 "adrfam": "ipv4", 00:32:23.141 "trsvcid": "4420", 00:32:23.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.141 "hdgst": false, 00:32:23.141 "ddgst": false 00:32:23.141 }, 00:32:23.141 "method": "bdev_nvme_attach_controller" 00:32:23.141 }' 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:23.141 03:40:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.407 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:23.407 fio-3.35 
00:32:23.407 Starting 1 thread 00:32:35.616 00:32:35.616 filename0: (groupid=0, jobs=1): err= 0: pid=2865994: Fri Dec 6 03:40:54 2024 00:32:35.616 read: IOPS=188, BW=754KiB/s (772kB/s)(7552KiB/10021msec) 00:32:35.616 slat (nsec): min=5656, max=27079, avg=6246.74, stdev=873.88 00:32:35.616 clat (usec): min=408, max=43828, avg=21213.30, stdev=20633.24 00:32:35.616 lat (usec): min=413, max=43855, avg=21219.55, stdev=20633.20 00:32:35.616 clat percentiles (usec): 00:32:35.616 | 1.00th=[ 424], 5.00th=[ 437], 10.00th=[ 457], 20.00th=[ 474], 00:32:35.616 | 30.00th=[ 486], 40.00th=[ 570], 50.00th=[40633], 60.00th=[41681], 00:32:35.616 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:32:35.616 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:32:35.616 | 99.99th=[43779] 00:32:35.616 bw ( KiB/s): min= 670, max= 768, per=99.92%, avg=753.50, stdev=30.51, samples=20 00:32:35.616 iops : min= 167, max= 192, avg=188.35, stdev= 7.70, samples=20 00:32:35.616 lat (usec) : 500=34.85%, 750=14.94% 00:32:35.616 lat (msec) : 50=50.21% 00:32:35.616 cpu : usr=92.55%, sys=7.20%, ctx=12, majf=0, minf=0 00:32:35.616 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:35.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.616 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:35.616 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:35.616 00:32:35.616 Run status group 0 (all jobs): 00:32:35.616 READ: bw=754KiB/s (772kB/s), 754KiB/s-754KiB/s (772kB/s-772kB/s), io=7552KiB (7733kB), run=10021-10021msec 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.616 00:32:35.616 real 0m11.205s 00:32:35.616 user 0m16.057s 00:32:35.616 sys 0m0.988s 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.616 03:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:35.616 ************************************ 00:32:35.617 END TEST fio_dif_1_default 00:32:35.617 ************************************ 00:32:35.617 03:40:54 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:35.617 03:40:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:35.617 03:40:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:35.617 03:40:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:35.617 ************************************ 00:32:35.617 START TEST fio_dif_1_multi_subsystems 00:32:35.617 ************************************ 00:32:35.617 03:40:54 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:35.617 bdev_null0 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.617 03:40:54 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:35.617 [2024-12-06 03:40:54.270715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:35.617 bdev_null1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- 
# local subsystem config 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:35.617 { 00:32:35.617 "params": { 00:32:35.617 "name": "Nvme$subsystem", 00:32:35.617 "trtype": "$TEST_TRANSPORT", 00:32:35.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.617 "adrfam": "ipv4", 00:32:35.617 "trsvcid": "$NVMF_PORT", 00:32:35.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.617 "hdgst": ${hdgst:-false}, 00:32:35.617 "ddgst": ${ddgst:-false} 00:32:35.617 }, 00:32:35.617 "method": "bdev_nvme_attach_controller" 00:32:35.617 } 00:32:35.617 EOF 00:32:35.617 )") 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:35.617 
03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:35.617 { 00:32:35.617 "params": { 00:32:35.617 "name": "Nvme$subsystem", 00:32:35.617 "trtype": "$TEST_TRANSPORT", 00:32:35.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.617 "adrfam": "ipv4", 00:32:35.617 "trsvcid": "$NVMF_PORT", 00:32:35.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.617 "hdgst": ${hdgst:-false}, 00:32:35.617 "ddgst": ${ddgst:-false} 00:32:35.617 }, 00:32:35.617 "method": "bdev_nvme_attach_controller" 00:32:35.617 } 00:32:35.617 EOF 00:32:35.617 )") 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:35.617 "params": { 00:32:35.617 "name": "Nvme0", 00:32:35.617 "trtype": "tcp", 00:32:35.617 "traddr": "10.0.0.2", 00:32:35.617 "adrfam": "ipv4", 00:32:35.617 "trsvcid": "4420", 00:32:35.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.617 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:35.617 "hdgst": false, 00:32:35.617 "ddgst": false 00:32:35.617 }, 00:32:35.617 "method": "bdev_nvme_attach_controller" 00:32:35.617 },{ 00:32:35.617 "params": { 00:32:35.617 "name": "Nvme1", 00:32:35.617 "trtype": "tcp", 00:32:35.617 "traddr": "10.0.0.2", 00:32:35.617 "adrfam": "ipv4", 00:32:35.617 "trsvcid": "4420", 00:32:35.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:35.617 "hdgst": false, 00:32:35.617 "ddgst": false 00:32:35.617 }, 00:32:35.617 "method": "bdev_nvme_attach_controller" 00:32:35.617 }' 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:35.617 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:35.618 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:35.618 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:35.618 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:35.618 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:35.618 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:35.618 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:35.618 03:40:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:35.618 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:35.618 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:35.618 fio-3.35 00:32:35.618 Starting 2 threads 00:32:45.618 00:32:45.618 filename0: (groupid=0, jobs=1): err= 0: pid=2867847: Fri Dec 6 03:41:05 2024 00:32:45.618 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10012msec) 00:32:45.618 slat (nsec): min=6092, max=27009, avg=7937.00, stdev=2705.54 00:32:45.618 clat (usec): min=663, max=42074, avg=40838.09, stdev=2577.69 00:32:45.618 lat (usec): min=670, max=42088, avg=40846.03, stdev=2577.65 00:32:45.618 clat percentiles (usec): 00:32:45.618 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:45.618 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:45.618 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:45.618 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:45.618 | 99.99th=[42206] 00:32:45.618 bw ( KiB/s): min= 384, max= 416, per=33.89%, avg=390.40, stdev=13.13, samples=20 00:32:45.618 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:32:45.618 lat (usec) : 750=0.41% 00:32:45.618 lat (msec) : 50=99.59% 00:32:45.618 cpu : usr=96.74%, sys=3.01%, ctx=13, majf=0, minf=174 00:32:45.618 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.618 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.618 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.618 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:45.618 filename1: (groupid=0, jobs=1): err= 0: pid=2867848: Fri Dec 6 03:41:05 2024 00:32:45.618 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10010msec) 00:32:45.618 slat (nsec): min=6145, max=28448, avg=7276.87, stdev=2035.47 00:32:45.618 clat (usec): min=432, max=42944, avg=21051.09, stdev=20480.00 00:32:45.618 lat (usec): min=438, max=42973, avg=21058.37, stdev=20479.37 00:32:45.618 clat percentiles (usec): 00:32:45.618 | 1.00th=[ 441], 5.00th=[ 461], 10.00th=[ 494], 20.00th=[ 506], 00:32:45.618 | 30.00th=[ 515], 40.00th=[ 619], 50.00th=[41157], 60.00th=[41157], 00:32:45.618 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:32:45.618 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:45.618 | 99.99th=[42730] 00:32:45.618 bw ( KiB/s): min= 704, max= 768, per=65.88%, avg=758.40, stdev=21.02, samples=20 00:32:45.618 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:32:45.618 lat (usec) : 500=15.74%, 750=33.84%, 1000=0.32% 00:32:45.618 lat (msec) : 50=50.11% 00:32:45.618 cpu : usr=96.78%, sys=2.97%, ctx=9, majf=0, minf=105 00:32:45.618 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.618 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.618 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:45.618 00:32:45.618 Run status group 0 (all jobs): 00:32:45.618 READ: bw=1151KiB/s (1178kB/s), 392KiB/s-759KiB/s (401kB/s-777kB/s), io=11.2MiB (11.8MB), run=10010-10012msec 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:45.618 03:41:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.618 00:32:45.618 real 0m11.300s 00:32:45.618 user 0m25.978s 00:32:45.618 sys 0m0.890s 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.618 03:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:45.618 ************************************ 00:32:45.618 END TEST fio_dif_1_multi_subsystems 00:32:45.618 ************************************ 00:32:45.618 03:41:05 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:45.618 03:41:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:45.618 03:41:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.618 03:41:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:45.618 ************************************ 00:32:45.618 START TEST fio_dif_rand_params 00:32:45.618 ************************************ 00:32:45.618 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:32:45.618 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:45.618 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:45.618 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:45.619 03:41:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:45.619 bdev_null0 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:45.619 [2024-12-06 03:41:05.647553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.619 { 00:32:45.619 "params": { 00:32:45.619 "name": "Nvme$subsystem", 00:32:45.619 "trtype": "$TEST_TRANSPORT", 00:32:45.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.619 "adrfam": "ipv4", 00:32:45.619 "trsvcid": "$NVMF_PORT", 00:32:45.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.619 "hdgst": ${hdgst:-false}, 00:32:45.619 "ddgst": ${ddgst:-false} 00:32:45.619 }, 00:32:45.619 "method": "bdev_nvme_attach_controller" 00:32:45.619 } 00:32:45.619 EOF 00:32:45.619 )") 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 
00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.619 "params": { 00:32:45.619 "name": "Nvme0", 00:32:45.619 "trtype": "tcp", 00:32:45.619 "traddr": "10.0.0.2", 00:32:45.619 "adrfam": "ipv4", 00:32:45.619 "trsvcid": "4420", 00:32:45.619 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:45.619 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:45.619 "hdgst": false, 00:32:45.619 "ddgst": false 00:32:45.619 }, 00:32:45.619 "method": "bdev_nvme_attach_controller" 00:32:45.619 }' 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:45.619 03:41:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:45.877 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:45.877 ... 00:32:45.877 fio-3.35 00:32:45.877 Starting 3 threads 00:32:52.465 00:32:52.465 filename0: (groupid=0, jobs=1): err= 0: pid=2869807: Fri Dec 6 03:41:11 2024 00:32:52.465 read: IOPS=313, BW=39.1MiB/s (41.0MB/s)(198MiB/5046msec) 00:32:52.465 slat (nsec): min=4487, max=21710, avg=11217.02, stdev=2158.53 00:32:52.465 clat (usec): min=3538, max=87354, avg=9540.44, stdev=5810.13 00:32:52.465 lat (usec): min=3545, max=87362, avg=9551.66, stdev=5810.14 00:32:52.465 clat percentiles (usec): 00:32:52.465 | 1.00th=[ 4080], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 6915], 00:32:52.465 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:32:52.465 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[11994], 00:32:52.465 | 99.00th=[46924], 99.50th=[49546], 99.90th=[85459], 99.95th=[87557], 00:32:52.465 | 99.99th=[87557] 00:32:52.465 bw ( KiB/s): min=33792, max=44544, per=36.25%, avg=40371.20, stdev=3421.96, samples=10 00:32:52.465 iops : min= 264, max= 348, avg=315.40, stdev=26.73, samples=10 00:32:52.465 lat (msec) : 4=0.76%, 10=63.54%, 20=34.24%, 50=1.20%, 100=0.25% 00:32:52.465 cpu : usr=92.92%, sys=6.78%, ctx=13, majf=0, minf=57 00:32:52.465 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.465 issued rwts: total=1580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:52.465 filename0: (groupid=0, jobs=1): err= 0: pid=2869808: Fri Dec 6 03:41:11 2024 00:32:52.465 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(183MiB/5045msec) 00:32:52.465 slat (nsec): min=6315, max=32262, 
avg=11253.42, stdev=2344.36 00:32:52.465 clat (usec): min=3645, max=53570, avg=10281.56, stdev=7398.08 00:32:52.465 lat (usec): min=3653, max=53583, avg=10292.82, stdev=7398.00 00:32:52.465 clat percentiles (usec): 00:32:52.465 | 1.00th=[ 3884], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7373], 00:32:52.465 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:32:52.465 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11731], 95.00th=[12518], 00:32:52.465 | 99.00th=[49546], 99.50th=[50594], 99.90th=[52691], 99.95th=[53740], 00:32:52.465 | 99.99th=[53740] 00:32:52.465 bw ( KiB/s): min=27904, max=45568, per=33.63%, avg=37452.80, stdev=6436.93, samples=10 00:32:52.465 iops : min= 218, max= 356, avg=292.60, stdev=50.29, samples=10 00:32:52.465 lat (msec) : 4=1.71%, 10=66.78%, 20=28.10%, 50=2.59%, 100=0.82% 00:32:52.465 cpu : usr=94.07%, sys=5.65%, ctx=16, majf=0, minf=52 00:32:52.465 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.465 issued rwts: total=1466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:52.465 filename0: (groupid=0, jobs=1): err= 0: pid=2869809: Fri Dec 6 03:41:11 2024 00:32:52.465 read: IOPS=266, BW=33.3MiB/s (34.9MB/s)(168MiB/5043msec) 00:32:52.465 slat (nsec): min=6245, max=25663, avg=11171.59, stdev=2254.84 00:32:52.465 clat (usec): min=3682, max=53490, avg=11211.57, stdev=8991.84 00:32:52.465 lat (usec): min=3689, max=53501, avg=11222.74, stdev=8991.82 00:32:52.465 clat percentiles (usec): 00:32:52.465 | 1.00th=[ 4228], 5.00th=[ 6390], 10.00th=[ 6849], 20.00th=[ 7963], 00:32:52.465 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:32:52.465 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[45351], 00:32:52.465 | 99.00th=[50594], 
99.50th=[51119], 99.90th=[53216], 99.95th=[53740], 00:32:52.465 | 99.99th=[53740] 00:32:52.465 bw ( KiB/s): min=27904, max=44800, per=30.85%, avg=34355.20, stdev=5880.95, samples=10 00:32:52.465 iops : min= 218, max= 350, avg=268.40, stdev=45.94, samples=10 00:32:52.465 lat (msec) : 4=0.60%, 10=66.44%, 20=27.68%, 50=4.17%, 100=1.12% 00:32:52.465 cpu : usr=94.88%, sys=4.82%, ctx=12, majf=0, minf=36 00:32:52.465 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.465 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:52.465 00:32:52.465 Run status group 0 (all jobs): 00:32:52.465 READ: bw=109MiB/s (114MB/s), 33.3MiB/s-39.1MiB/s (34.9MB/s-41.0MB/s), io=549MiB (575MB), run=5043-5046msec 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.465 bdev_null0 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.465 [2024-12-06 03:41:11.840624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.465 bdev_null1 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.465 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.466 03:41:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.466 bdev_null2 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:52.466 { 00:32:52.466 "params": { 00:32:52.466 "name": "Nvme$subsystem", 00:32:52.466 "trtype": "$TEST_TRANSPORT", 00:32:52.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.466 "adrfam": "ipv4", 00:32:52.466 "trsvcid": "$NVMF_PORT", 00:32:52.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.466 "hdgst": ${hdgst:-false}, 00:32:52.466 "ddgst": ${ddgst:-false} 00:32:52.466 }, 00:32:52.466 "method": "bdev_nvme_attach_controller" 00:32:52.466 } 00:32:52.466 EOF 00:32:52.466 )") 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:52.466 03:41:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:52.466 { 00:32:52.466 "params": { 00:32:52.466 "name": "Nvme$subsystem", 00:32:52.466 "trtype": "$TEST_TRANSPORT", 00:32:52.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.466 "adrfam": "ipv4", 00:32:52.466 "trsvcid": "$NVMF_PORT", 00:32:52.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.466 "hdgst": ${hdgst:-false}, 00:32:52.466 "ddgst": ${ddgst:-false} 00:32:52.466 }, 00:32:52.466 "method": "bdev_nvme_attach_controller" 00:32:52.466 } 00:32:52.466 EOF 00:32:52.466 )") 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file++ )) 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:52.466 { 00:32:52.466 "params": { 00:32:52.466 "name": "Nvme$subsystem", 00:32:52.466 "trtype": "$TEST_TRANSPORT", 00:32:52.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.466 "adrfam": "ipv4", 00:32:52.466 "trsvcid": "$NVMF_PORT", 00:32:52.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.466 "hdgst": ${hdgst:-false}, 00:32:52.466 "ddgst": ${ddgst:-false} 00:32:52.466 }, 00:32:52.466 "method": "bdev_nvme_attach_controller" 00:32:52.466 } 00:32:52.466 EOF 00:32:52.466 )") 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:52.466 03:41:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:52.466 "params": { 00:32:52.466 "name": "Nvme0", 00:32:52.466 "trtype": "tcp", 00:32:52.466 "traddr": "10.0.0.2", 00:32:52.466 "adrfam": "ipv4", 00:32:52.466 "trsvcid": "4420", 00:32:52.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:52.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:52.466 "hdgst": false, 00:32:52.466 "ddgst": false 00:32:52.466 }, 00:32:52.466 "method": "bdev_nvme_attach_controller" 00:32:52.466 },{ 00:32:52.466 "params": { 00:32:52.466 "name": "Nvme1", 00:32:52.466 "trtype": "tcp", 00:32:52.466 "traddr": "10.0.0.2", 00:32:52.466 "adrfam": "ipv4", 00:32:52.466 "trsvcid": "4420", 00:32:52.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:52.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:52.466 "hdgst": false, 00:32:52.466 "ddgst": false 00:32:52.466 }, 00:32:52.466 "method": "bdev_nvme_attach_controller" 00:32:52.466 },{ 00:32:52.466 "params": { 00:32:52.466 "name": "Nvme2", 00:32:52.466 "trtype": "tcp", 00:32:52.466 "traddr": "10.0.0.2", 00:32:52.466 "adrfam": "ipv4", 00:32:52.466 "trsvcid": "4420", 00:32:52.466 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:52.466 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:52.466 "hdgst": false, 00:32:52.466 "ddgst": false 00:32:52.466 }, 00:32:52.466 "method": "bdev_nvme_attach_controller" 00:32:52.467 }' 00:32:52.467 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:52.467 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:52.467 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.467 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.467 03:41:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:52.467 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:52.467 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:52.467 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:52.467 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:52.467 03:41:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.467 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:52.467 ... 00:32:52.467 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:52.467 ... 00:32:52.467 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:52.467 ... 
00:32:52.467 fio-3.35 00:32:52.467 Starting 24 threads 00:33:04.681 00:33:04.681 filename0: (groupid=0, jobs=1): err= 0: pid=2871076: Fri Dec 6 03:41:23 2024 00:33:04.681 read: IOPS=613, BW=2454KiB/s (2513kB/s)(24.0MiB/10014msec) 00:33:04.681 slat (nsec): min=6568, max=74405, avg=16457.87, stdev=9285.62 00:33:04.681 clat (usec): min=1053, max=30201, avg=25952.32, stdev=3424.01 00:33:04.681 lat (usec): min=1067, max=30222, avg=25968.78, stdev=3424.08 00:33:04.681 clat percentiles (usec): 00:33:04.681 | 1.00th=[ 1450], 5.00th=[24249], 10.00th=[25297], 20.00th=[25822], 00:33:04.681 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:33:04.681 | 70.00th=[26346], 80.00th=[27132], 90.00th=[28181], 95.00th=[28705], 00:33:04.681 | 99.00th=[28967], 99.50th=[29754], 99.90th=[30016], 99.95th=[30278], 00:33:04.682 | 99.99th=[30278] 00:33:04.682 bw ( KiB/s): min= 2304, max= 3456, per=4.24%, avg=2451.20, stdev=246.65, samples=20 00:33:04.682 iops : min= 576, max= 864, avg=612.80, stdev=61.66, samples=20 00:33:04.682 lat (msec) : 2=1.04%, 4=0.11%, 10=0.67%, 20=0.78%, 50=97.40% 00:33:04.682 cpu : usr=98.91%, sys=0.70%, ctx=56, majf=0, minf=11 00:33:04.682 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.682 filename0: (groupid=0, jobs=1): err= 0: pid=2871077: Fri Dec 6 03:41:23 2024 00:33:04.682 read: IOPS=602, BW=2409KiB/s (2467kB/s)(23.6MiB/10017msec) 00:33:04.682 slat (nsec): min=3423, max=61982, avg=21375.63, stdev=9785.03 00:33:04.682 clat (usec): min=14383, max=41236, avg=26378.43, stdev=1286.53 00:33:04.682 lat (usec): min=14392, max=41249, avg=26399.81, stdev=1287.54 00:33:04.682 clat percentiles (usec): 
00:33:04.682 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25560], 20.00th=[25822], 00:33:04.682 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:33:04.682 | 70.00th=[26346], 80.00th=[27132], 90.00th=[28181], 95.00th=[28705], 00:33:04.682 | 99.00th=[28967], 99.50th=[29492], 99.90th=[36439], 99.95th=[36439], 00:33:04.682 | 99.99th=[41157] 00:33:04.682 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2410.68, stdev=97.23, samples=19 00:33:04.682 iops : min= 544, max= 640, avg=602.53, stdev=24.25, samples=19 00:33:04.682 lat (msec) : 20=0.18%, 50=99.82% 00:33:04.682 cpu : usr=97.79%, sys=1.40%, ctx=190, majf=0, minf=9 00:33:04.682 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.682 filename0: (groupid=0, jobs=1): err= 0: pid=2871078: Fri Dec 6 03:41:23 2024 00:33:04.682 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10007msec) 00:33:04.682 slat (nsec): min=7007, max=66875, avg=14433.30, stdev=7357.28 00:33:04.682 clat (usec): min=9535, max=30898, avg=26356.48, stdev=1566.00 00:33:04.682 lat (usec): min=9550, max=30934, avg=26370.92, stdev=1565.21 00:33:04.682 clat percentiles (usec): 00:33:04.682 | 1.00th=[20055], 5.00th=[25035], 10.00th=[25822], 20.00th=[25822], 00:33:04.682 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:33:04.682 | 70.00th=[26346], 80.00th=[27132], 90.00th=[28443], 95.00th=[28967], 00:33:04.682 | 99.00th=[29230], 99.50th=[29230], 99.90th=[30802], 99.95th=[30802], 00:33:04.682 | 99.99th=[30802] 00:33:04.682 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2418.00, stdev=112.29, samples=19 00:33:04.682 iops : min= 544, max= 672, avg=604.42, stdev=28.12, 
samples=19 00:33:04.682 lat (msec) : 10=0.03%, 20=1.03%, 50=98.94% 00:33:04.682 cpu : usr=98.57%, sys=1.04%, ctx=60, majf=0, minf=9 00:33:04.682 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.682 filename0: (groupid=0, jobs=1): err= 0: pid=2871079: Fri Dec 6 03:41:23 2024 00:33:04.682 read: IOPS=603, BW=2414KiB/s (2472kB/s)(23.6MiB/10021msec) 00:33:04.682 slat (nsec): min=8213, max=88850, avg=36807.70, stdev=15505.66 00:33:04.682 clat (usec): min=11610, max=37631, avg=26232.19, stdev=1405.66 00:33:04.682 lat (usec): min=11624, max=37683, avg=26269.00, stdev=1406.00 00:33:04.682 clat percentiles (usec): 00:33:04.682 | 1.00th=[23200], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:33:04.682 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:33:04.682 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28705], 00:33:04.682 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:33:04.682 | 99.99th=[37487] 00:33:04.682 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2412.90, stdev=102.64, samples=20 00:33:04.682 iops : min= 544, max= 672, avg=603.15, stdev=25.65, samples=20 00:33:04.682 lat (msec) : 20=0.56%, 50=99.44% 00:33:04.682 cpu : usr=98.32%, sys=1.18%, ctx=70, majf=0, minf=9 00:33:04.682 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.682 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:04.682 filename0: (groupid=0, jobs=1): err= 0: pid=2871080: Fri Dec 6 03:41:23 2024 00:33:04.682 read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10004msec) 00:33:04.682 slat (nsec): min=3329, max=85781, avg=34874.44, stdev=15221.97 00:33:04.682 clat (usec): min=22871, max=36419, avg=26339.10, stdev=1194.58 00:33:04.682 lat (usec): min=22900, max=36431, avg=26373.97, stdev=1193.88 00:33:04.682 clat percentiles (usec): 00:33:04.682 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25560], 20.00th=[25822], 00:33:04.682 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:33:04.682 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28705], 00:33:04.682 | 99.00th=[28967], 99.50th=[30540], 99.90th=[36439], 99.95th=[36439], 00:33:04.682 | 99.99th=[36439] 00:33:04.682 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.53, stdev=80.55, samples=19 00:33:04.682 iops : min= 544, max= 640, avg=601.05, stdev=20.11, samples=19 00:33:04.682 lat (msec) : 50=100.00% 00:33:04.682 cpu : usr=97.10%, sys=1.71%, ctx=172, majf=0, minf=9 00:33:04.682 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.682 filename0: (groupid=0, jobs=1): err= 0: pid=2871081: Fri Dec 6 03:41:23 2024 00:33:04.682 read: IOPS=603, BW=2414KiB/s (2472kB/s)(23.6MiB/10021msec) 00:33:04.682 slat (nsec): min=7863, max=84648, avg=29033.62, stdev=14015.72 00:33:04.682 clat (usec): min=9881, max=34200, avg=26297.41, stdev=1407.89 00:33:04.682 lat (usec): min=9906, max=34229, avg=26326.44, stdev=1407.37 00:33:04.682 clat percentiles (usec): 00:33:04.682 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25560], 
20.00th=[25822], 00:33:04.682 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:33:04.682 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28705], 00:33:04.682 | 99.00th=[28967], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:33:04.682 | 99.99th=[34341] 00:33:04.682 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2412.90, stdev=103.55, samples=20 00:33:04.682 iops : min= 544, max= 672, avg=603.15, stdev=25.88, samples=20 00:33:04.682 lat (msec) : 10=0.03%, 20=0.53%, 50=99.44% 00:33:04.682 cpu : usr=97.89%, sys=1.30%, ctx=129, majf=0, minf=9 00:33:04.682 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.682 filename0: (groupid=0, jobs=1): err= 0: pid=2871082: Fri Dec 6 03:41:23 2024 00:33:04.682 read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10005msec) 00:33:04.682 slat (nsec): min=7247, max=75810, avg=27833.16, stdev=14765.99 00:33:04.682 clat (usec): min=21001, max=37284, avg=26400.00, stdev=1156.20 00:33:04.682 lat (usec): min=21010, max=37309, avg=26427.83, stdev=1154.97 00:33:04.682 clat percentiles (usec): 00:33:04.682 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25560], 20.00th=[25822], 00:33:04.682 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:33:04.682 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28443], 95.00th=[28705], 00:33:04.682 | 99.00th=[28967], 99.50th=[30802], 99.90th=[33817], 99.95th=[33817], 00:33:04.682 | 99.99th=[37487] 00:33:04.682 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2411.26, stdev=64.03, samples=19 00:33:04.682 iops : min= 576, max= 640, avg=602.74, stdev=15.99, samples=19 00:33:04.682 lat (msec) : 50=100.00% 
00:33:04.682 cpu : usr=98.63%, sys=1.01%, ctx=18, majf=0, minf=9 00:33:04.682 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.682 filename0: (groupid=0, jobs=1): err= 0: pid=2871083: Fri Dec 6 03:41:23 2024 00:33:04.682 read: IOPS=602, BW=2409KiB/s (2467kB/s)(23.6MiB/10016msec) 00:33:04.682 slat (nsec): min=6983, max=61890, avg=16652.61, stdev=11149.66 00:33:04.682 clat (usec): min=15770, max=40014, avg=26442.45, stdev=1317.12 00:33:04.682 lat (usec): min=15780, max=40024, avg=26459.10, stdev=1317.60 00:33:04.682 clat percentiles (usec): 00:33:04.682 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25560], 20.00th=[25822], 00:33:04.682 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:33:04.682 | 70.00th=[26346], 80.00th=[27132], 90.00th=[28443], 95.00th=[28705], 00:33:04.682 | 99.00th=[28967], 99.50th=[29230], 99.90th=[40109], 99.95th=[40109], 00:33:04.682 | 99.99th=[40109] 00:33:04.682 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2411.00, stdev=97.73, samples=19 00:33:04.682 iops : min= 544, max= 640, avg=602.63, stdev=24.41, samples=19 00:33:04.682 lat (msec) : 20=0.22%, 50=99.78% 00:33:04.682 cpu : usr=98.81%, sys=0.78%, ctx=65, majf=0, minf=9 00:33:04.682 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.682 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.683 filename1: (groupid=0, jobs=1): err= 0: 
pid=2871084: Fri Dec 6 03:41:23 2024 00:33:04.683 read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10004msec) 00:33:04.683 slat (nsec): min=6765, max=88825, avg=40502.74, stdev=14518.41 00:33:04.683 clat (usec): min=13953, max=49681, avg=26254.85, stdev=1629.29 00:33:04.683 lat (usec): min=13979, max=49699, avg=26295.36, stdev=1629.54 00:33:04.683 clat percentiles (usec): 00:33:04.683 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:33:04.683 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:33:04.683 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28443], 00:33:04.683 | 99.00th=[28967], 99.50th=[30278], 99.90th=[46400], 99.95th=[46400], 00:33:04.683 | 99.99th=[49546] 00:33:04.683 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.53, stdev=80.99, samples=19 00:33:04.683 iops : min= 544, max= 640, avg=601.05, stdev=20.29, samples=19 00:33:04.683 lat (msec) : 20=0.27%, 50=99.73% 00:33:04.683 cpu : usr=97.58%, sys=1.51%, ctx=174, majf=0, minf=9 00:33:04.683 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.683 filename1: (groupid=0, jobs=1): err= 0: pid=2871085: Fri Dec 6 03:41:23 2024 00:33:04.683 read: IOPS=601, BW=2406KiB/s (2463kB/s)(23.5MiB/10003msec) 00:33:04.683 slat (nsec): min=7284, max=79097, avg=38015.70, stdev=14035.41 00:33:04.683 clat (usec): min=14079, max=45698, avg=26294.16, stdev=1585.86 00:33:04.683 lat (usec): min=14089, max=45717, avg=26332.18, stdev=1585.74 00:33:04.683 clat percentiles (usec): 00:33:04.683 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:33:04.683 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 
60.00th=[26084], 00:33:04.683 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28705], 00:33:04.683 | 99.00th=[28967], 99.50th=[30278], 99.90th=[45876], 99.95th=[45876], 00:33:04.683 | 99.99th=[45876] 00:33:04.683 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.53, stdev=80.99, samples=19 00:33:04.683 iops : min= 544, max= 640, avg=601.05, stdev=20.29, samples=19 00:33:04.683 lat (msec) : 20=0.27%, 50=99.73% 00:33:04.683 cpu : usr=98.13%, sys=1.23%, ctx=90, majf=0, minf=9 00:33:04.683 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:04.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.683 filename1: (groupid=0, jobs=1): err= 0: pid=2871086: Fri Dec 6 03:41:23 2024 00:33:04.683 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10007msec) 00:33:04.683 slat (nsec): min=6917, max=55359, avg=11587.84, stdev=4972.45 00:33:04.683 clat (usec): min=9513, max=30925, avg=26364.71, stdev=1568.37 00:33:04.683 lat (usec): min=9528, max=30961, avg=26376.30, stdev=1567.68 00:33:04.683 clat percentiles (usec): 00:33:04.683 | 1.00th=[20055], 5.00th=[25035], 10.00th=[25822], 20.00th=[25822], 00:33:04.683 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:33:04.683 | 70.00th=[26608], 80.00th=[27132], 90.00th=[28443], 95.00th=[28967], 00:33:04.683 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30802], 99.95th=[30802], 00:33:04.683 | 99.99th=[30802] 00:33:04.683 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2418.00, stdev=103.87, samples=19 00:33:04.683 iops : min= 544, max= 672, avg=604.42, stdev=26.01, samples=19 00:33:04.683 lat (msec) : 10=0.03%, 20=1.03%, 50=98.94% 00:33:04.683 cpu : usr=98.67%, sys=0.94%, ctx=52, majf=0, minf=11 00:33:04.683 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.683 filename1: (groupid=0, jobs=1): err= 0: pid=2871087: Fri Dec 6 03:41:23 2024 00:33:04.683 read: IOPS=603, BW=2414KiB/s (2472kB/s)(23.6MiB/10021msec) 00:33:04.683 slat (nsec): min=6486, max=99331, avg=45115.74, stdev=15311.59 00:33:04.683 clat (usec): min=9826, max=30054, avg=26114.46, stdev=1357.44 00:33:04.683 lat (usec): min=9846, max=30084, avg=26159.58, stdev=1361.59 00:33:04.683 clat percentiles (usec): 00:33:04.683 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:33:04.683 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:33:04.683 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[28443], 00:33:04.683 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:33:04.683 | 99.99th=[30016] 00:33:04.683 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2412.90, stdev=103.55, samples=20 00:33:04.683 iops : min= 544, max= 672, avg=603.15, stdev=25.88, samples=20 00:33:04.683 lat (msec) : 10=0.03%, 20=0.50%, 50=99.47% 00:33:04.683 cpu : usr=99.00%, sys=0.64%, ctx=17, majf=0, minf=9 00:33:04.683 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.683 filename1: (groupid=0, jobs=1): err= 0: pid=2871088: Fri Dec 6 03:41:23 2024 00:33:04.683 read: IOPS=601, 
BW=2406KiB/s (2463kB/s)(23.5MiB/10003msec) 00:33:04.683 slat (nsec): min=6287, max=93690, avg=40507.51, stdev=17556.28 00:33:04.683 clat (usec): min=3364, max=44296, avg=26238.43, stdev=1791.44 00:33:04.683 lat (usec): min=3370, max=44329, avg=26278.94, stdev=1794.11 00:33:04.683 clat percentiles (usec): 00:33:04.683 | 1.00th=[23725], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:33:04.683 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:33:04.683 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28443], 00:33:04.683 | 99.00th=[29492], 99.50th=[30540], 99.90th=[44303], 99.95th=[44303], 00:33:04.683 | 99.99th=[44303] 00:33:04.683 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.74, stdev=81.07, samples=19 00:33:04.683 iops : min= 544, max= 640, avg=601.11, stdev=20.31, samples=19 00:33:04.683 lat (msec) : 4=0.17%, 20=0.13%, 50=99.70% 00:33:04.683 cpu : usr=98.48%, sys=0.96%, ctx=90, majf=0, minf=9 00:33:04.683 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:04.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.683 filename1: (groupid=0, jobs=1): err= 0: pid=2871089: Fri Dec 6 03:41:23 2024 00:33:04.683 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.5MiB/10002msec) 00:33:04.683 slat (nsec): min=6216, max=97627, avg=42521.53, stdev=16497.82 00:33:04.683 clat (usec): min=22757, max=33814, avg=26230.61, stdev=1124.29 00:33:04.683 lat (usec): min=22770, max=33833, avg=26273.13, stdev=1126.01 00:33:04.683 clat percentiles (usec): 00:33:04.683 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:33:04.683 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:33:04.683 | 70.00th=[26346], 
80.00th=[26870], 90.00th=[28181], 95.00th=[28443], 00:33:04.683 | 99.00th=[28967], 99.50th=[29492], 99.90th=[33817], 99.95th=[33817], 00:33:04.683 | 99.99th=[33817] 00:33:04.683 bw ( KiB/s): min= 2299, max= 2560, per=4.17%, avg=2411.26, stdev=64.58, samples=19 00:33:04.683 iops : min= 574, max= 640, avg=602.74, stdev=16.21, samples=19 00:33:04.683 lat (msec) : 50=100.00% 00:33:04.683 cpu : usr=98.31%, sys=1.08%, ctx=72, majf=0, minf=9 00:33:04.683 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:04.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.683 filename1: (groupid=0, jobs=1): err= 0: pid=2871090: Fri Dec 6 03:41:23 2024 00:33:04.683 read: IOPS=602, BW=2410KiB/s (2468kB/s)(23.6MiB/10012msec) 00:33:04.683 slat (nsec): min=5341, max=84974, avg=42585.86, stdev=13602.17 00:33:04.683 clat (usec): min=11442, max=32427, avg=26194.10, stdev=1345.11 00:33:04.683 lat (usec): min=11455, max=32442, avg=26236.69, stdev=1346.62 00:33:04.683 clat percentiles (usec): 00:33:04.683 | 1.00th=[23725], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:33:04.683 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:33:04.683 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28443], 00:33:04.683 | 99.00th=[28967], 99.50th=[29754], 99.90th=[32375], 99.95th=[32375], 00:33:04.683 | 99.99th=[32375] 00:33:04.683 bw ( KiB/s): min= 2299, max= 2565, per=4.17%, avg=2411.53, stdev=65.23, samples=19 00:33:04.683 iops : min= 574, max= 641, avg=602.79, stdev=16.34, samples=19 00:33:04.683 lat (msec) : 20=0.27%, 50=99.73% 00:33:04.683 cpu : usr=97.94%, sys=1.32%, ctx=154, majf=0, minf=9 00:33:04.683 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, 
>=64=0.0% 00:33:04.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.683 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.683 filename1: (groupid=0, jobs=1): err= 0: pid=2871091: Fri Dec 6 03:41:23 2024 00:33:04.683 read: IOPS=603, BW=2414KiB/s (2472kB/s)(23.6MiB/10020msec) 00:33:04.683 slat (nsec): min=7055, max=97791, avg=43360.60, stdev=16921.02 00:33:04.683 clat (usec): min=10514, max=30074, avg=26108.73, stdev=1370.73 00:33:04.683 lat (usec): min=10522, max=30116, avg=26152.09, stdev=1374.90 00:33:04.683 clat percentiles (usec): 00:33:04.683 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:33:04.684 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:33:04.684 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[28443], 00:33:04.684 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29754], 99.95th=[30016], 00:33:04.684 | 99.99th=[30016] 00:33:04.684 bw ( KiB/s): min= 2176, max= 2693, per=4.17%, avg=2413.15, stdev=112.22, samples=20 00:33:04.684 iops : min= 544, max= 673, avg=603.20, stdev=28.01, samples=20 00:33:04.684 lat (msec) : 20=0.53%, 50=99.47% 00:33:04.684 cpu : usr=98.37%, sys=0.98%, ctx=274, majf=0, minf=9 00:33:04.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.684 filename2: (groupid=0, jobs=1): err= 0: pid=2871092: Fri Dec 6 03:41:23 2024 00:33:04.684 read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10005msec) 00:33:04.684 slat (nsec): 
min=6252, max=77971, avg=23960.23, stdev=15993.79 00:33:04.684 clat (usec): min=21297, max=41768, avg=26346.84, stdev=1331.45 00:33:04.684 lat (usec): min=21304, max=41786, avg=26370.80, stdev=1333.02 00:33:04.684 clat percentiles (usec): 00:33:04.684 | 1.00th=[23987], 5.00th=[25035], 10.00th=[25560], 20.00th=[25822], 00:33:04.684 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:33:04.684 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28705], 00:33:04.684 | 99.00th=[28967], 99.50th=[30016], 99.90th=[41681], 99.95th=[41681], 00:33:04.684 | 99.99th=[41681] 00:33:04.684 bw ( KiB/s): min= 2180, max= 2560, per=4.16%, avg=2404.74, stdev=79.92, samples=19 00:33:04.684 iops : min= 545, max= 640, avg=601.11, stdev=19.96, samples=19 00:33:04.684 lat (msec) : 50=100.00% 00:33:04.684 cpu : usr=99.03%, sys=0.57%, ctx=34, majf=0, minf=9 00:33:04.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:04.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.684 filename2: (groupid=0, jobs=1): err= 0: pid=2871093: Fri Dec 6 03:41:23 2024 00:33:04.684 read: IOPS=602, BW=2410KiB/s (2468kB/s)(23.6MiB/10012msec) 00:33:04.684 slat (nsec): min=6752, max=85174, avg=42651.43, stdev=13509.35 00:33:04.684 clat (usec): min=11441, max=32452, avg=26183.05, stdev=1346.09 00:33:04.684 lat (usec): min=11453, max=32465, avg=26225.70, stdev=1347.92 00:33:04.684 clat percentiles (usec): 00:33:04.684 | 1.00th=[23725], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:33:04.684 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:33:04.684 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28443], 00:33:04.684 | 99.00th=[28967], 
99.50th=[29754], 99.90th=[32375], 99.95th=[32375], 00:33:04.684 | 99.99th=[32375] 00:33:04.684 bw ( KiB/s): min= 2299, max= 2565, per=4.17%, avg=2411.53, stdev=65.23, samples=19 00:33:04.684 iops : min= 574, max= 641, avg=602.79, stdev=16.34, samples=19 00:33:04.684 lat (msec) : 20=0.27%, 50=99.73% 00:33:04.684 cpu : usr=98.19%, sys=1.11%, ctx=86, majf=0, minf=9 00:33:04.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:04.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.684 filename2: (groupid=0, jobs=1): err= 0: pid=2871094: Fri Dec 6 03:41:23 2024 00:33:04.684 read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10004msec) 00:33:04.684 slat (nsec): min=7356, max=92161, avg=41086.90, stdev=16200.05 00:33:04.684 clat (usec): min=13966, max=46580, avg=26226.42, stdev=1611.10 00:33:04.684 lat (usec): min=13993, max=46601, avg=26267.51, stdev=1612.12 00:33:04.684 clat percentiles (usec): 00:33:04.684 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:33:04.684 | 30.00th=[25822], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:33:04.684 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28443], 00:33:04.684 | 99.00th=[28967], 99.50th=[30016], 99.90th=[46400], 99.95th=[46400], 00:33:04.684 | 99.99th=[46400] 00:33:04.684 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.53, stdev=80.99, samples=19 00:33:04.684 iops : min= 544, max= 640, avg=601.05, stdev=20.29, samples=19 00:33:04.684 lat (msec) : 20=0.27%, 50=99.73% 00:33:04.684 cpu : usr=98.56%, sys=0.92%, ctx=56, majf=0, minf=9 00:33:04.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:04.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:04.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.684 filename2: (groupid=0, jobs=1): err= 0: pid=2871095: Fri Dec 6 03:41:23 2024 00:33:04.684 read: IOPS=602, BW=2410KiB/s (2467kB/s)(23.6MiB/10013msec) 00:33:04.684 slat (usec): min=6, max=100, avg=41.47, stdev=17.85 00:33:04.684 clat (usec): min=13991, max=31323, avg=26177.09, stdev=1241.37 00:33:04.684 lat (usec): min=14013, max=31341, avg=26218.56, stdev=1244.82 00:33:04.684 clat percentiles (usec): 00:33:04.684 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:33:04.684 | 30.00th=[25822], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:33:04.684 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[28443], 00:33:04.684 | 99.00th=[28967], 99.50th=[30016], 99.90th=[31327], 99.95th=[31327], 00:33:04.684 | 99.99th=[31327] 00:33:04.684 bw ( KiB/s): min= 2299, max= 2565, per=4.16%, avg=2406.40, stdev=67.98, samples=20 00:33:04.684 iops : min= 574, max= 641, avg=601.55, stdev=17.03, samples=20 00:33:04.684 lat (msec) : 20=0.27%, 50=99.73% 00:33:04.684 cpu : usr=98.91%, sys=0.66%, ctx=48, majf=0, minf=9 00:33:04.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:04.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.684 filename2: (groupid=0, jobs=1): err= 0: pid=2871096: Fri Dec 6 03:41:23 2024 00:33:04.684 read: IOPS=603, BW=2414KiB/s (2472kB/s)(23.6MiB/10020msec) 00:33:04.684 slat (nsec): min=7690, max=97813, avg=41796.02, stdev=17109.54 00:33:04.684 clat (usec): min=11604, max=30116, 
avg=26146.22, stdev=1362.47 00:33:04.684 lat (usec): min=11619, max=30148, avg=26188.02, stdev=1365.74 00:33:04.684 clat percentiles (usec): 00:33:04.684 | 1.00th=[23200], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:33:04.684 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:33:04.684 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[28443], 00:33:04.684 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29754], 99.95th=[30016], 00:33:04.684 | 99.99th=[30016] 00:33:04.684 bw ( KiB/s): min= 2176, max= 2693, per=4.17%, avg=2413.15, stdev=112.22, samples=20 00:33:04.684 iops : min= 544, max= 673, avg=603.20, stdev=28.01, samples=20 00:33:04.684 lat (msec) : 20=0.53%, 50=99.47% 00:33:04.684 cpu : usr=98.78%, sys=0.83%, ctx=27, majf=0, minf=9 00:33:04.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:04.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.684 filename2: (groupid=0, jobs=1): err= 0: pid=2871097: Fri Dec 6 03:41:23 2024 00:33:04.684 read: IOPS=603, BW=2414KiB/s (2472kB/s)(23.6MiB/10021msec) 00:33:04.684 slat (nsec): min=8392, max=82056, avg=29091.23, stdev=14542.92 00:33:04.684 clat (usec): min=11587, max=30206, avg=26299.94, stdev=1380.25 00:33:04.684 lat (usec): min=11603, max=30224, avg=26329.03, stdev=1379.63 00:33:04.684 clat percentiles (usec): 00:33:04.684 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25560], 20.00th=[25822], 00:33:04.684 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:33:04.684 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[28705], 00:33:04.684 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:33:04.684 | 99.99th=[30278] 00:33:04.684 bw ( 
KiB/s): min= 2176, max= 2688, per=4.17%, avg=2412.90, stdev=103.55, samples=20 00:33:04.684 iops : min= 544, max= 672, avg=603.15, stdev=25.88, samples=20 00:33:04.684 lat (msec) : 20=0.53%, 50=99.47% 00:33:04.684 cpu : usr=97.56%, sys=1.46%, ctx=175, majf=0, minf=9 00:33:04.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:04.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.684 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.684 filename2: (groupid=0, jobs=1): err= 0: pid=2871098: Fri Dec 6 03:41:23 2024 00:33:04.684 read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10005msec) 00:33:04.684 slat (nsec): min=7568, max=91142, avg=41024.76, stdev=16901.86 00:33:04.684 clat (usec): min=13974, max=46562, avg=26241.41, stdev=1607.50 00:33:04.684 lat (usec): min=13986, max=46584, avg=26282.43, stdev=1608.43 00:33:04.684 clat percentiles (usec): 00:33:04.684 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:33:04.684 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:33:04.684 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[28443], 00:33:04.684 | 99.00th=[28967], 99.50th=[30016], 99.90th=[46400], 99.95th=[46400], 00:33:04.684 | 99.99th=[46400] 00:33:04.684 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.53, stdev=80.99, samples=19 00:33:04.684 iops : min= 544, max= 640, avg=601.05, stdev=20.29, samples=19 00:33:04.684 lat (msec) : 20=0.27%, 50=99.73% 00:33:04.685 cpu : usr=98.61%, sys=0.92%, ctx=54, majf=0, minf=9 00:33:04.685 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:04.685 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.685 filename2: (groupid=0, jobs=1): err= 0: pid=2871099: Fri Dec 6 03:41:23 2024 00:33:04.685 read: IOPS=602, BW=2409KiB/s (2467kB/s)(23.6MiB/10015msec) 00:33:04.685 slat (usec): min=7, max=116, avg=43.70, stdev=16.62 00:33:04.685 clat (usec): min=14079, max=31947, avg=26164.67, stdev=1255.75 00:33:04.685 lat (usec): min=14095, max=32063, avg=26208.37, stdev=1259.37 00:33:04.685 clat percentiles (usec): 00:33:04.685 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:33:04.685 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:33:04.685 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[28443], 00:33:04.685 | 99.00th=[28967], 99.50th=[30016], 99.90th=[31327], 99.95th=[31589], 00:33:04.685 | 99.99th=[31851] 00:33:04.685 bw ( KiB/s): min= 2299, max= 2560, per=4.16%, avg=2406.15, stdev=67.37, samples=20 00:33:04.685 iops : min= 574, max= 640, avg=601.50, stdev=16.91, samples=20 00:33:04.685 lat (msec) : 20=0.27%, 50=99.73% 00:33:04.685 cpu : usr=98.86%, sys=0.73%, ctx=40, majf=0, minf=9 00:33:04.685 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:04.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.685 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:04.685 00:33:04.685 Run status group 0 (all jobs): 00:33:04.685 READ: bw=56.5MiB/s (59.2MB/s), 2405KiB/s-2454KiB/s (2463kB/s-2513kB/s), io=566MiB (593MB), run=10002-10021msec 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 
00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:04.685 03:41:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 bdev_null0 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:04.685 03:41:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 [2024-12-06 03:41:23.624873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 bdev_null1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 
03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:04.685 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@56 -- # cat 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:04.686 { 00:33:04.686 "params": { 00:33:04.686 "name": "Nvme$subsystem", 00:33:04.686 "trtype": "$TEST_TRANSPORT", 00:33:04.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:04.686 "adrfam": "ipv4", 00:33:04.686 "trsvcid": "$NVMF_PORT", 00:33:04.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:04.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:04.686 "hdgst": ${hdgst:-false}, 00:33:04.686 "ddgst": ${ddgst:-false} 00:33:04.686 }, 00:33:04.686 "method": "bdev_nvme_attach_controller" 00:33:04.686 } 00:33:04.686 EOF 00:33:04.686 )") 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@73 -- # cat 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:04.686 { 00:33:04.686 "params": { 00:33:04.686 "name": "Nvme$subsystem", 00:33:04.686 "trtype": "$TEST_TRANSPORT", 00:33:04.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:04.686 "adrfam": "ipv4", 00:33:04.686 "trsvcid": "$NVMF_PORT", 00:33:04.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:04.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:04.686 "hdgst": ${hdgst:-false}, 00:33:04.686 "ddgst": ${ddgst:-false} 00:33:04.686 }, 00:33:04.686 "method": "bdev_nvme_attach_controller" 00:33:04.686 } 00:33:04.686 EOF 00:33:04.686 )") 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
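The heredoc template above is expanded once per subsystem (with `$subsystem` substituted) and the fragments are then merged by `jq` into the JSON config handed to fio's spdk_bdev ioengine. A minimal sketch of that expansion, mirroring the parameter values visible in this log; `controller_params` is an illustrative helper of mine, not a function in the test scripts:

```python
import json

# Reconstruct the per-subsystem "bdev_nvme_attach_controller" fragment that the
# shell heredoc above generates. Field names and values mirror the log output;
# hdgst/ddgst default to false as in the expanded config.
def controller_params(idx, traddr="10.0.0.2", trsvcid="4420"):
    return {
        "params": {
            "name": f"Nvme{idx}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{idx}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{idx}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }

# Two controllers (Nvme0, Nvme1), matching the two subsystems created above.
config = [controller_params(i) for i in (0, 1)]
print(json.dumps(config, indent=2))
```

Each fragment names one NVMe/TCP controller to attach, which is why the test first adds a TCP listener on 10.0.0.2:4420 for each subsystem before launching fio.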
00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:04.686 "params": { 00:33:04.686 "name": "Nvme0", 00:33:04.686 "trtype": "tcp", 00:33:04.686 "traddr": "10.0.0.2", 00:33:04.686 "adrfam": "ipv4", 00:33:04.686 "trsvcid": "4420", 00:33:04.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:04.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:04.686 "hdgst": false, 00:33:04.686 "ddgst": false 00:33:04.686 }, 00:33:04.686 "method": "bdev_nvme_attach_controller" 00:33:04.686 },{ 00:33:04.686 "params": { 00:33:04.686 "name": "Nvme1", 00:33:04.686 "trtype": "tcp", 00:33:04.686 "traddr": "10.0.0.2", 00:33:04.686 "adrfam": "ipv4", 00:33:04.686 "trsvcid": "4420", 00:33:04.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:04.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:04.686 "hdgst": false, 00:33:04.686 "ddgst": false 00:33:04.686 }, 00:33:04.686 "method": "bdev_nvme_attach_controller" 00:33:04.686 }' 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:04.686 03:41:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:04.686 03:41:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:04.686 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:04.686 ... 00:33:04.686 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:04.686 ... 00:33:04.686 fio-3.35 00:33:04.686 Starting 4 threads 00:33:09.952 00:33:09.952 filename0: (groupid=0, jobs=1): err= 0: pid=2873039: Fri Dec 6 03:41:29 2024 00:33:09.952 read: IOPS=2719, BW=21.2MiB/s (22.3MB/s)(106MiB/5003msec) 00:33:09.952 slat (usec): min=6, max=177, avg= 9.03, stdev= 3.28 00:33:09.952 clat (usec): min=1061, max=43445, avg=2914.27, stdev=1117.31 00:33:09.952 lat (usec): min=1073, max=43473, avg=2923.30, stdev=1117.29 00:33:09.952 clat percentiles (usec): 00:33:09.952 | 1.00th=[ 1827], 5.00th=[ 2147], 10.00th=[ 2343], 20.00th=[ 2507], 00:33:09.952 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2966], 00:33:09.952 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3490], 95.00th=[ 3982], 00:33:09.952 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[43254], 00:33:09.952 | 99.99th=[43254] 00:33:09.952 bw ( KiB/s): min=20480, max=23456, per=26.67%, avg=21850.67, stdev=962.83, samples=9 00:33:09.952 iops : min= 2560, max= 2932, avg=2731.33, stdev=120.35, samples=9 00:33:09.952 lat (msec) : 2=2.48%, 4=92.62%, 10=4.84%, 50=0.06% 00:33:09.952 cpu : usr=95.64%, sys=4.04%, ctx=10, majf=0, minf=9 00:33:09.952 IO depths : 1=0.2%, 2=5.1%, 4=66.1%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.952 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.952 issued rwts: total=13607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:09.952 filename0: (groupid=0, jobs=1): err= 0: pid=2873040: Fri Dec 6 03:41:29 2024 00:33:09.952 read: IOPS=2524, BW=19.7MiB/s (20.7MB/s)(98.6MiB/5001msec) 00:33:09.952 slat (usec): min=6, max=160, avg= 9.19, stdev= 3.44 00:33:09.952 clat (usec): min=808, max=5724, avg=3142.33, stdev=605.26 00:33:09.952 lat (usec): min=820, max=5731, avg=3151.52, stdev=604.92 00:33:09.952 clat percentiles (usec): 00:33:09.952 | 1.00th=[ 1680], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 2769], 00:33:09.952 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:09.952 | 70.00th=[ 3228], 80.00th=[ 3458], 90.00th=[ 3916], 95.00th=[ 4490], 00:33:09.952 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5407], 99.95th=[ 5538], 00:33:09.952 | 99.99th=[ 5735] 00:33:09.952 bw ( KiB/s): min=18880, max=22224, per=24.62%, avg=20170.67, stdev=1013.89, samples=9 00:33:09.952 iops : min= 2360, max= 2778, avg=2521.33, stdev=126.74, samples=9 00:33:09.952 lat (usec) : 1000=0.08% 00:33:09.952 lat (msec) : 2=1.68%, 4=89.16%, 10=9.09% 00:33:09.952 cpu : usr=96.18%, sys=3.52%, ctx=8, majf=0, minf=9 00:33:09.952 IO depths : 1=0.1%, 2=3.5%, 4=68.3%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.952 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.952 issued rwts: total=12625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:09.952 filename1: (groupid=0, jobs=1): err= 0: pid=2873041: Fri Dec 6 03:41:29 2024 00:33:09.952 read: IOPS=2434, BW=19.0MiB/s (19.9MB/s)(95.1MiB/5001msec) 00:33:09.952 slat (nsec): min=6262, max=30223, avg=9085.86, stdev=3252.28 00:33:09.952 clat (usec): min=809, max=6501, avg=3259.55, stdev=595.73 00:33:09.952 lat 
(usec): min=819, max=6508, avg=3268.63, stdev=595.12 00:33:09.952 clat percentiles (usec): 00:33:09.952 | 1.00th=[ 1696], 5.00th=[ 2606], 10.00th=[ 2769], 20.00th=[ 2868], 00:33:09.952 | 30.00th=[ 2999], 40.00th=[ 3064], 50.00th=[ 3130], 60.00th=[ 3195], 00:33:09.952 | 70.00th=[ 3359], 80.00th=[ 3556], 90.00th=[ 4080], 95.00th=[ 4621], 00:33:09.952 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 5604], 99.95th=[ 5735], 00:33:09.952 | 99.99th=[ 6521] 00:33:09.952 bw ( KiB/s): min=18400, max=20432, per=23.61%, avg=19346.67, stdev=792.81, samples=9 00:33:09.952 iops : min= 2300, max= 2554, avg=2418.33, stdev=99.10, samples=9 00:33:09.952 lat (usec) : 1000=0.02% 00:33:09.952 lat (msec) : 2=1.43%, 4=88.14%, 10=10.41% 00:33:09.952 cpu : usr=96.22%, sys=3.46%, ctx=7, majf=0, minf=9 00:33:09.952 IO depths : 1=0.1%, 2=2.5%, 4=69.6%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.952 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.952 issued rwts: total=12176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:09.952 filename1: (groupid=0, jobs=1): err= 0: pid=2873042: Fri Dec 6 03:41:29 2024 00:33:09.952 read: IOPS=2563, BW=20.0MiB/s (21.0MB/s)(100MiB/5002msec) 00:33:09.952 slat (nsec): min=6259, max=33155, avg=9195.99, stdev=3217.02 00:33:09.952 clat (usec): min=961, max=43174, avg=3091.88, stdev=1154.04 00:33:09.952 lat (usec): min=973, max=43194, avg=3101.07, stdev=1153.87 00:33:09.952 clat percentiles (usec): 00:33:09.952 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2671], 00:33:09.952 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3097], 00:33:09.952 | 70.00th=[ 3163], 80.00th=[ 3359], 90.00th=[ 3785], 95.00th=[ 4359], 00:33:09.952 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5473], 99.95th=[43254], 00:33:09.952 | 99.99th=[43254] 00:33:09.952 bw ( KiB/s): 
min=18352, max=22224, per=25.08%, avg=20545.78, stdev=1023.89, samples=9 00:33:09.952 iops : min= 2294, max= 2778, avg=2568.22, stdev=127.99, samples=9 00:33:09.952 lat (usec) : 1000=0.01% 00:33:09.952 lat (msec) : 2=1.07%, 4=91.00%, 10=7.86%, 50=0.06% 00:33:09.952 cpu : usr=96.52%, sys=3.16%, ctx=9, majf=0, minf=9 00:33:09.952 IO depths : 1=0.2%, 2=5.4%, 4=66.5%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.952 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.952 issued rwts: total=12825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:09.952 00:33:09.952 Run status group 0 (all jobs): 00:33:09.952 READ: bw=80.0MiB/s (83.9MB/s), 19.0MiB/s-21.2MiB/s (19.9MB/s-22.3MB/s), io=400MiB (420MB), run=5001-5003msec 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.952 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.211 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.211 00:33:10.211 real 0m24.480s 00:33:10.211 user 4m50.975s 00:33:10.211 sys 0m5.089s 00:33:10.211 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:10.211 03:41:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.211 ************************************ 00:33:10.211 END TEST fio_dif_rand_params 00:33:10.211 ************************************ 00:33:10.211 03:41:30 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:10.211 03:41:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:10.211 03:41:30 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.211 03:41:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.211 ************************************ 00:33:10.211 START TEST fio_dif_digest 00:33:10.211 ************************************ 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:33:10.211 bdev_null0 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.211 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.212 [2024-12-06 03:41:30.188032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:10.212 { 00:33:10.212 "params": { 00:33:10.212 "name": "Nvme$subsystem", 00:33:10.212 "trtype": "$TEST_TRANSPORT", 00:33:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:10.212 "adrfam": "ipv4", 00:33:10.212 "trsvcid": "$NVMF_PORT", 00:33:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.212 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.212 "hdgst": ${hdgst:-false}, 00:33:10.212 "ddgst": ${ddgst:-false} 00:33:10.212 }, 00:33:10.212 "method": "bdev_nvme_attach_controller" 00:33:10.212 } 00:33:10.212 EOF 00:33:10.212 )") 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:10.212 "params": { 00:33:10.212 "name": "Nvme0", 00:33:10.212 "trtype": "tcp", 00:33:10.212 "traddr": "10.0.0.2", 00:33:10.212 "adrfam": "ipv4", 00:33:10.212 "trsvcid": "4420", 00:33:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:10.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:10.212 "hdgst": true, 00:33:10.212 "ddgst": true 00:33:10.212 }, 00:33:10.212 "method": "bdev_nvme_attach_controller" 00:33:10.212 }' 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:10.212 03:41:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.470 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:10.470 ... 00:33:10.470 fio-3.35 00:33:10.470 Starting 3 threads 00:33:22.674 00:33:22.674 filename0: (groupid=0, jobs=1): err= 0: pid=2874102: Fri Dec 6 03:41:41 2024 00:33:22.674 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(355MiB/10047msec) 00:33:22.674 slat (nsec): min=6533, max=33241, avg=12214.70, stdev=2127.86 00:33:22.674 clat (usec): min=6686, max=53260, avg=10590.44, stdev=1316.82 00:33:22.674 lat (usec): min=6695, max=53268, avg=10602.66, stdev=1316.70 00:33:22.674 clat percentiles (usec): 00:33:22.674 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:33:22.674 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:33:22.674 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:33:22.674 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13304], 99.95th=[47973], 00:33:22.674 | 99.99th=[53216] 00:33:22.674 bw ( KiB/s): min=35072, max=37632, per=35.35%, avg=36300.80, stdev=702.80, samples=20 00:33:22.674 iops : min= 274, max= 294, avg=283.60, stdev= 5.49, samples=20 00:33:22.674 lat 
(msec) : 10=21.49%, 20=78.44%, 50=0.04%, 100=0.04% 00:33:22.674 cpu : usr=95.40%, sys=4.25%, ctx=53, majf=0, minf=2 00:33:22.674 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.674 issued rwts: total=2838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:22.674 filename0: (groupid=0, jobs=1): err= 0: pid=2874103: Fri Dec 6 03:41:41 2024 00:33:22.674 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(326MiB/10045msec) 00:33:22.674 slat (nsec): min=6509, max=41177, avg=12221.24, stdev=2098.97 00:33:22.674 clat (usec): min=8876, max=52154, avg=11519.18, stdev=1899.99 00:33:22.674 lat (usec): min=8890, max=52195, avg=11531.40, stdev=1900.24 00:33:22.674 clat percentiles (usec): 00:33:22.674 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10421], 20.00th=[10814], 00:33:22.674 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:33:22.674 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:33:22.674 | 99.00th=[13566], 99.50th=[14091], 99.90th=[52167], 99.95th=[52167], 00:33:22.674 | 99.99th=[52167] 00:33:22.674 bw ( KiB/s): min=30464, max=34304, per=32.50%, avg=33369.60, stdev=868.24, samples=20 00:33:22.674 iops : min= 238, max= 268, avg=260.70, stdev= 6.78, samples=20 00:33:22.674 lat (msec) : 10=2.11%, 20=97.70%, 50=0.04%, 100=0.15% 00:33:22.674 cpu : usr=95.82%, sys=3.87%, ctx=20, majf=0, minf=12 00:33:22.674 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.674 issued rwts: total=2609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.674 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:33:22.674 filename0: (groupid=0, jobs=1): err= 0: pid=2874104: Fri Dec 6 03:41:41 2024 00:33:22.674 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(327MiB/10044msec) 00:33:22.674 slat (nsec): min=6483, max=26456, avg=12473.46, stdev=1993.90 00:33:22.674 clat (usec): min=7286, max=48967, avg=11501.23, stdev=1300.43 00:33:22.674 lat (usec): min=7303, max=48979, avg=11513.70, stdev=1300.43 00:33:22.674 clat percentiles (usec): 00:33:22.674 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:33:22.674 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:33:22.674 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:33:22.674 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14353], 99.95th=[46400], 00:33:22.674 | 99.99th=[49021] 00:33:22.674 bw ( KiB/s): min=32512, max=34560, per=32.55%, avg=33420.80, stdev=541.31, samples=20 00:33:22.674 iops : min= 254, max= 270, avg=261.10, stdev= 4.23, samples=20 00:33:22.674 lat (msec) : 10=2.95%, 20=96.98%, 50=0.08% 00:33:22.674 cpu : usr=94.39%, sys=4.84%, ctx=591, majf=0, minf=9 00:33:22.674 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.674 issued rwts: total=2613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:22.674 00:33:22.674 Run status group 0 (all jobs): 00:33:22.674 READ: bw=100MiB/s (105MB/s), 32.5MiB/s-35.3MiB/s (34.0MB/s-37.0MB/s), io=1008MiB (1056MB), run=10044-10047msec 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest 
-- target/dif.sh@46 -- # destroy_subsystem 0 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.674 00:33:22.674 real 0m11.224s 00:33:22.674 user 0m35.364s 00:33:22.674 sys 0m1.614s 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.674 03:41:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.674 ************************************ 00:33:22.674 END TEST fio_dif_digest 00:33:22.674 ************************************ 00:33:22.674 03:41:41 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:22.674 03:41:41 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.674 rmmod nvme_tcp 00:33:22.674 rmmod 
nvme_fabrics 00:33:22.674 rmmod nvme_keyring 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2865724 ']' 00:33:22.674 03:41:41 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2865724 00:33:22.674 03:41:41 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2865724 ']' 00:33:22.674 03:41:41 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2865724 00:33:22.674 03:41:41 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:33:22.675 03:41:41 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.675 03:41:41 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865724 00:33:22.675 03:41:41 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:22.675 03:41:41 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:22.675 03:41:41 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865724' 00:33:22.675 killing process with pid 2865724 00:33:22.675 03:41:41 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2865724 00:33:22.675 03:41:41 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2865724 00:33:22.675 03:41:41 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:22.675 03:41:41 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:24.583 Waiting for block devices as requested 00:33:24.583 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:24.583 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:24.583 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:24.583 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:24.583 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:24.843 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:24.843 
0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:24.843 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:24.843 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:25.107 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:25.107 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:25.107 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:25.107 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:25.370 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:25.370 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:25.370 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:25.628 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:25.628 03:41:45 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.628 03:41:45 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.628 03:41:45 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:25.628 03:41:45 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.628 03:41:45 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:25.628 03:41:45 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.628 03:41:45 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.628 03:41:45 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.628 03:41:45 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.628 03:41:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:25.628 03:41:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.165 03:41:47 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:28.165 00:33:28.165 real 1m12.640s 00:33:28.165 user 7m7.461s 00:33:28.165 sys 0m19.329s 00:33:28.165 03:41:47 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.165 03:41:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:28.165 ************************************ 00:33:28.165 END TEST nvmf_dif 00:33:28.165 
************************************ 00:33:28.165 03:41:47 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:28.165 03:41:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:28.165 03:41:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:28.165 03:41:47 -- common/autotest_common.sh@10 -- # set +x 00:33:28.165 ************************************ 00:33:28.165 START TEST nvmf_abort_qd_sizes 00:33:28.165 ************************************ 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:28.165 * Looking for test storage... 00:33:28.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- 
scripts/common.sh@340 -- # ver1_l=2 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:28.165 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:28.166 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:33:28.166 --rc genhtml_branch_coverage=1 00:33:28.166 --rc genhtml_function_coverage=1 00:33:28.166 --rc genhtml_legend=1 00:33:28.166 --rc geninfo_all_blocks=1 00:33:28.166 --rc geninfo_unexecuted_blocks=1 00:33:28.166 00:33:28.166 ' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:28.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.166 --rc genhtml_branch_coverage=1 00:33:28.166 --rc genhtml_function_coverage=1 00:33:28.166 --rc genhtml_legend=1 00:33:28.166 --rc geninfo_all_blocks=1 00:33:28.166 --rc geninfo_unexecuted_blocks=1 00:33:28.166 00:33:28.166 ' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:28.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.166 --rc genhtml_branch_coverage=1 00:33:28.166 --rc genhtml_function_coverage=1 00:33:28.166 --rc genhtml_legend=1 00:33:28.166 --rc geninfo_all_blocks=1 00:33:28.166 --rc geninfo_unexecuted_blocks=1 00:33:28.166 00:33:28.166 ' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:28.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.166 --rc genhtml_branch_coverage=1 00:33:28.166 --rc genhtml_function_coverage=1 00:33:28.166 --rc genhtml_legend=1 00:33:28.166 --rc geninfo_all_blocks=1 00:33:28.166 --rc geninfo_unexecuted_blocks=1 00:33:28.166 00:33:28.166 ' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.166 03:41:47 
nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:28.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:33:28.166 03:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:33.443 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:33.443 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:33.443 Found net devices under 0000:86:00.0: cvl_0_0 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:33.443 Found net devices under 0000:86:00.1: cvl_0_1 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:33.443 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:33.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:33.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:33:33.443 00:33:33.444 --- 10.0.0.2 ping statistics --- 00:33:33.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.444 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:33:33.444 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:33.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:33:33.444 00:33:33.444 --- 10.0.0.1 ping statistics --- 00:33:33.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.444 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:33:33.444 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.444 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:33:33.444 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:33.444 03:41:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:36.008 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:36.008 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:36.008 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:36.008 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:36.008 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:36.266 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:33:36.266 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:37.199 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2882025 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2882025 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2882025 ']' 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:37.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:37.199 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:37.199 [2024-12-06 03:41:57.294341] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:33:37.199 [2024-12-06 03:41:57.294382] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.457 [2024-12-06 03:41:57.361772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:37.457 [2024-12-06 03:41:57.406848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:37.457 [2024-12-06 03:41:57.406886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:37.457 [2024-12-06 03:41:57.406894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:37.457 [2024-12-06 03:41:57.406900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:37.457 [2024-12-06 03:41:57.406905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:37.457 [2024-12-06 03:41:57.408351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.457 [2024-12-06 03:41:57.408371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:37.457 [2024-12-06 03:41:57.408439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:37.457 [2024-12-06 03:41:57.408440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.457 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.457 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:37.458 03:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:37.716 ************************************ 00:33:37.716 START TEST spdk_target_abort 00:33:37.716 ************************************ 00:33:37.716 03:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:33:37.716 03:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:37.716 03:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:33:37.716 03:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.716 03:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:41.006 spdk_targetn1 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:41.006 [2024-12-06 03:42:00.429662] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:41.006 [2024-12-06 03:42:00.470378] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:41.006 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:41.007 03:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:44.295 Initializing NVMe Controllers 00:33:44.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:44.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:44.295 Initialization complete. Launching workers. 
00:33:44.295 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16227, failed: 0 00:33:44.295 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1303, failed to submit 14924 00:33:44.295 success 771, unsuccessful 532, failed 0 00:33:44.295 03:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:44.295 03:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:47.581 Initializing NVMe Controllers 00:33:47.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:47.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:47.581 Initialization complete. Launching workers. 00:33:47.581 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8528, failed: 0 00:33:47.581 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7294 00:33:47.581 success 322, unsuccessful 912, failed 0 00:33:47.581 03:42:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:47.581 03:42:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:50.116 Initializing NVMe Controllers 00:33:50.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:50.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:50.116 Initialization complete. Launching workers. 
00:33:50.116 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37606, failed: 0 00:33:50.116 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2818, failed to submit 34788 00:33:50.116 success 573, unsuccessful 2245, failed 0 00:33:50.116 03:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:50.116 03:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.116 03:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:50.116 03:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.116 03:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:50.116 03:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.116 03:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2882025 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2882025 ']' 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2882025 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2882025 00:33:51.493 03:42:11 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2882025' 00:33:51.493 killing process with pid 2882025 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2882025 00:33:51.493 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2882025 00:33:51.752 00:33:51.752 real 0m14.156s 00:33:51.752 user 0m53.938s 00:33:51.752 sys 0m2.631s 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:51.752 ************************************ 00:33:51.752 END TEST spdk_target_abort 00:33:51.752 ************************************ 00:33:51.752 03:42:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:51.752 03:42:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:51.752 03:42:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.752 03:42:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:51.752 ************************************ 00:33:51.752 START TEST kernel_target_abort 00:33:51.752 ************************************ 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:51.752 03:42:11 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:51.752 03:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:54.288 Waiting for block devices as requested 00:33:54.288 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:54.547 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:54.547 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:54.547 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:54.807 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:54.807 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:54.807 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:54.807 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:55.067 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:55.067 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:55.067 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:55.067 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:55.324 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:55.324 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:55.324 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:55.582 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:55.582 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:55.582 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:55.582 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:55.582 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:55.582 03:42:15 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:55.582 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:55.582 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:55.582 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:55.582 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:55.582 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:55.582 No valid GPT data, bailing 00:33:55.582 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:55.841 00:33:55.841 Discovery Log Number of Records 2, Generation counter 2 00:33:55.841 =====Discovery Log Entry 0====== 00:33:55.841 trtype: tcp 00:33:55.841 adrfam: ipv4 00:33:55.841 subtype: current discovery subsystem 00:33:55.841 treq: not specified, sq flow control disable supported 00:33:55.841 portid: 1 00:33:55.841 trsvcid: 4420 00:33:55.841 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:55.841 traddr: 10.0.0.1 00:33:55.841 eflags: none 00:33:55.841 sectype: none 00:33:55.841 =====Discovery Log Entry 1====== 00:33:55.841 trtype: tcp 00:33:55.841 adrfam: ipv4 00:33:55.841 subtype: nvme subsystem 00:33:55.841 treq: not specified, sq flow control disable supported 00:33:55.841 portid: 1 00:33:55.841 trsvcid: 4420 00:33:55.841 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:55.841 traddr: 10.0.0.1 00:33:55.841 eflags: none 00:33:55.841 sectype: none 00:33:55.841 03:42:15 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:55.841 03:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:59.125 Initializing NVMe Controllers 00:33:59.125 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:59.125 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:59.125 Initialization complete. Launching workers. 
00:33:59.125 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89658, failed: 0 00:33:59.125 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 89658, failed to submit 0 00:33:59.125 success 0, unsuccessful 89658, failed 0 00:33:59.125 03:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:59.125 03:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:02.410 Initializing NVMe Controllers 00:34:02.410 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:02.410 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:02.410 Initialization complete. Launching workers. 00:34:02.410 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142055, failed: 0 00:34:02.410 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35422, failed to submit 106633 00:34:02.410 success 0, unsuccessful 35422, failed 0 00:34:02.410 03:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:02.410 03:42:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:05.697 Initializing NVMe Controllers 00:34:05.697 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:05.697 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:05.697 Initialization complete. Launching workers. 
00:34:05.697 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136325, failed: 0 00:34:05.697 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34142, failed to submit 102183 00:34:05.697 success 0, unsuccessful 34142, failed 0 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:05.697 03:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:07.601 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:07.601 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:08.540 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:08.540 00:34:08.540 real 0m16.626s 00:34:08.540 user 0m8.570s 00:34:08.540 sys 0m4.476s 00:34:08.540 03:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.540 03:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:08.540 ************************************ 00:34:08.540 END TEST kernel_target_abort 00:34:08.540 ************************************ 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:08.540 rmmod nvme_tcp 00:34:08.540 rmmod nvme_fabrics 00:34:08.540 rmmod nvme_keyring 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2882025 ']' 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2882025 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2882025 ']' 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2882025 00:34:08.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2882025) - No such process 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2882025 is not found' 00:34:08.540 Process with pid 2882025 is not found 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:08.540 03:42:28 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:11.073 Waiting for block devices as requested 00:34:11.073 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:11.073 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:11.333 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:11.333 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:11.333 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:11.592 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:11.592 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:11.592 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:11.592 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:11.851 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:11.851 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:11.851 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:11.851 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:12.110 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:12.110 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:12.110 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:12.406 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:12.406 03:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.559 03:42:34 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:14.559 00:34:14.559 real 0m46.681s 00:34:14.559 user 1m6.569s 00:34:14.559 sys 0m15.339s 00:34:14.559 03:42:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.559 03:42:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:14.559 ************************************ 00:34:14.559 END TEST nvmf_abort_qd_sizes 00:34:14.559 ************************************ 00:34:14.559 03:42:34 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:14.559 03:42:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:14.559 03:42:34 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:34:14.559 03:42:34 -- common/autotest_common.sh@10 -- # set +x 00:34:14.559 ************************************ 00:34:14.559 START TEST keyring_file 00:34:14.559 ************************************ 00:34:14.559 03:42:34 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:14.559 * Looking for test storage... 00:34:14.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:14.559 03:42:34 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:14.559 03:42:34 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:34:14.559 03:42:34 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:14.559 03:42:34 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.559 03:42:34 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.560 03:42:34 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:14.560 03:42:34 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:14.560 03:42:34 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:14.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.560 --rc genhtml_branch_coverage=1 00:34:14.560 --rc genhtml_function_coverage=1 00:34:14.560 --rc genhtml_legend=1 00:34:14.560 --rc geninfo_all_blocks=1 00:34:14.560 --rc geninfo_unexecuted_blocks=1 00:34:14.560 00:34:14.560 ' 00:34:14.560 03:42:34 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:14.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.560 --rc genhtml_branch_coverage=1 00:34:14.560 --rc genhtml_function_coverage=1 00:34:14.560 --rc genhtml_legend=1 00:34:14.560 --rc geninfo_all_blocks=1 00:34:14.560 --rc 
geninfo_unexecuted_blocks=1 00:34:14.560 00:34:14.560 ' 00:34:14.560 03:42:34 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:14.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.560 --rc genhtml_branch_coverage=1 00:34:14.560 --rc genhtml_function_coverage=1 00:34:14.560 --rc genhtml_legend=1 00:34:14.560 --rc geninfo_all_blocks=1 00:34:14.560 --rc geninfo_unexecuted_blocks=1 00:34:14.560 00:34:14.560 ' 00:34:14.560 03:42:34 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:14.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.560 --rc genhtml_branch_coverage=1 00:34:14.560 --rc genhtml_function_coverage=1 00:34:14.560 --rc genhtml_legend=1 00:34:14.560 --rc geninfo_all_blocks=1 00:34:14.560 --rc geninfo_unexecuted_blocks=1 00:34:14.560 00:34:14.560 ' 00:34:14.560 03:42:34 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:14.560 03:42:34 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.560 03:42:34 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.560 03:42:34 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.560 03:42:34 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.560 03:42:34 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.560 03:42:34 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.560 03:42:34 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:14.560 03:42:34 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:14.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.560 03:42:34 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.560 03:42:34 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:14.560 03:42:34 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:14.560 03:42:34 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:14.560 03:42:34 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:14.561 03:42:34 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:14.561 03:42:34 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:14.561 03:42:34 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:14.561 03:42:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:14.561 03:42:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:14.561 03:42:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:14.561 03:42:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:14.561 03:42:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:14.561 03:42:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.llHdKUGKvL 00:34:14.561 03:42:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:14.561 03:42:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:14.561 03:42:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.561 03:42:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:14.561 03:42:34 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:34:14.561 03:42:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:14.561 03:42:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.llHdKUGKvL 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.llHdKUGKvL 00:34:14.820 03:42:34 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.llHdKUGKvL 00:34:14.820 03:42:34 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AMG9gSutCt 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:14.820 03:42:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:14.820 03:42:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.820 03:42:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:14.820 03:42:34 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:14.820 03:42:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:14.820 03:42:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AMG9gSutCt 00:34:14.820 03:42:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AMG9gSutCt 00:34:14.820 03:42:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.AMG9gSutCt 
00:34:14.820 03:42:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=2891188 00:34:14.820 03:42:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2891188 00:34:14.820 03:42:34 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2891188 ']' 00:34:14.820 03:42:34 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.820 03:42:34 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.820 03:42:34 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.820 03:42:34 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.820 03:42:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:14.820 03:42:34 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:14.820 [2024-12-06 03:42:34.831238] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:34:14.820 [2024-12-06 03:42:34.831288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891188 ] 00:34:14.820 [2024-12-06 03:42:34.892852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.820 [2024-12-06 03:42:34.935136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:15.087 03:42:35 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:15.087 [2024-12-06 03:42:35.146826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.087 null0 00:34:15.087 [2024-12-06 03:42:35.178872] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:15.087 [2024-12-06 03:42:35.179208] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.087 03:42:35 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:15.087 [2024-12-06 03:42:35.206939] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:15.087 request: 00:34:15.087 { 00:34:15.087 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:15.087 "secure_channel": false, 00:34:15.087 "listen_address": { 00:34:15.087 "trtype": "tcp", 00:34:15.087 "traddr": "127.0.0.1", 00:34:15.087 "trsvcid": "4420" 00:34:15.087 }, 00:34:15.087 "method": "nvmf_subsystem_add_listener", 00:34:15.087 "req_id": 1 00:34:15.087 } 00:34:15.087 Got JSON-RPC error response 00:34:15.087 response: 00:34:15.087 { 00:34:15.087 "code": -32602, 00:34:15.087 "message": "Invalid parameters" 00:34:15.087 } 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:15.087 03:42:35 keyring_file -- keyring/file.sh@47 -- # bperfpid=2891193 00:34:15.087 03:42:35 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2891193 /var/tmp/bperf.sock 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2891193 ']' 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:15.087 03:42:35 
keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:15.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:15.087 03:42:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:15.087 03:42:35 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:15.346 [2024-12-06 03:42:35.259165] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 00:34:15.346 [2024-12-06 03:42:35.259207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891193 ] 00:34:15.346 [2024-12-06 03:42:35.319995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.346 [2024-12-06 03:42:35.360557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.346 03:42:35 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.346 03:42:35 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:15.346 03:42:35 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.llHdKUGKvL 00:34:15.346 03:42:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.llHdKUGKvL 00:34:15.605 03:42:35 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AMG9gSutCt 00:34:15.605 03:42:35 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AMG9gSutCt 00:34:15.863 03:42:35 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:15.863 03:42:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:15.863 03:42:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:15.863 03:42:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:15.863 03:42:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:16.121 03:42:36 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.llHdKUGKvL == \/\t\m\p\/\t\m\p\.\l\l\H\d\K\U\G\K\v\L ]] 00:34:16.121 03:42:36 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:16.121 03:42:36 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:16.121 03:42:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:16.121 03:42:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:16.121 03:42:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:16.121 03:42:36 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.AMG9gSutCt == \/\t\m\p\/\t\m\p\.\A\M\G\9\g\S\u\t\C\t ]] 00:34:16.121 03:42:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:16.121 03:42:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:16.121 03:42:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:16.121 03:42:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:16.121 03:42:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:16.121 03:42:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:34:16.378 03:42:36 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:16.378 03:42:36 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:16.378 03:42:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:16.378 03:42:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:16.378 03:42:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:16.378 03:42:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:16.378 03:42:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:16.637 03:42:36 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:16.637 03:42:36 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.637 03:42:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.637 [2024-12-06 03:42:36.767871] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:16.895 nvme0n1 00:34:16.896 03:42:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:16.896 03:42:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:16.896 03:42:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:16.896 03:42:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:16.896 03:42:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:16.896 03:42:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:34:17.154 03:42:37 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:17.154 03:42:37 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:17.154 03:42:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:17.154 03:42:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:17.154 03:42:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:17.154 03:42:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:17.155 03:42:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:17.155 03:42:37 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:17.155 03:42:37 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:17.414 Running I/O for 1 seconds... 00:34:18.349 17744.00 IOPS, 69.31 MiB/s 00:34:18.349 Latency(us) 00:34:18.349 [2024-12-06T02:42:38.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.349 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:18.349 nvme0n1 : 1.00 17786.04 69.48 0.00 0.00 7181.82 4758.48 16298.52 00:34:18.349 [2024-12-06T02:42:38.490Z] =================================================================================================================== 00:34:18.349 [2024-12-06T02:42:38.490Z] Total : 17786.04 69.48 0.00 0.00 7181.82 4758.48 16298.52 00:34:18.349 { 00:34:18.349 "results": [ 00:34:18.349 { 00:34:18.349 "job": "nvme0n1", 00:34:18.349 "core_mask": "0x2", 00:34:18.349 "workload": "randrw", 00:34:18.349 "percentage": 50, 00:34:18.349 "status": "finished", 00:34:18.349 "queue_depth": 128, 00:34:18.349 "io_size": 4096, 00:34:18.349 "runtime": 1.004833, 00:34:18.349 "iops": 17786.040068349666, 00:34:18.349 "mibps": 69.47671901699088, 
00:34:18.349 "io_failed": 0, 00:34:18.349 "io_timeout": 0, 00:34:18.349 "avg_latency_us": 7181.815689541085, 00:34:18.349 "min_latency_us": 4758.48347826087, 00:34:18.349 "max_latency_us": 16298.518260869565 00:34:18.349 } 00:34:18.349 ], 00:34:18.349 "core_count": 1 00:34:18.349 } 00:34:18.349 03:42:38 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:18.349 03:42:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:18.608 03:42:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:18.608 03:42:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:18.608 03:42:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:18.608 03:42:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:18.608 03:42:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:18.608 03:42:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:18.867 03:42:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:18.867 03:42:38 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:18.867 03:42:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:18.867 03:42:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:18.867 03:42:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:18.867 03:42:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:18.867 03:42:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:18.867 03:42:38 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:18.867 03:42:38 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:18.867 03:42:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:18.867 03:42:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:18.867 03:42:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:18.867 03:42:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.867 03:42:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:18.867 03:42:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.867 03:42:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:18.867 03:42:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:19.127 [2024-12-06 03:42:39.151024] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:19.127 [2024-12-06 03:42:39.151547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2402e30 (107): Transport endpoint is not connected 00:34:19.127 [2024-12-06 03:42:39.152542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2402e30 (9): Bad file descriptor 00:34:19.127 [2024-12-06 03:42:39.153543] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:19.127 [2024-12-06 03:42:39.153553] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:19.127 [2024-12-06 03:42:39.153560] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:19.127 [2024-12-06 03:42:39.153569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:34:19.127 request: 00:34:19.127 { 00:34:19.127 "name": "nvme0", 00:34:19.127 "trtype": "tcp", 00:34:19.127 "traddr": "127.0.0.1", 00:34:19.127 "adrfam": "ipv4", 00:34:19.127 "trsvcid": "4420", 00:34:19.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:19.127 "prchk_reftag": false, 00:34:19.127 "prchk_guard": false, 00:34:19.127 "hdgst": false, 00:34:19.127 "ddgst": false, 00:34:19.127 "psk": "key1", 00:34:19.127 "allow_unrecognized_csi": false, 00:34:19.127 "method": "bdev_nvme_attach_controller", 00:34:19.127 "req_id": 1 00:34:19.127 } 00:34:19.127 Got JSON-RPC error response 00:34:19.127 response: 00:34:19.127 { 00:34:19.127 "code": -5, 00:34:19.127 "message": "Input/output error" 00:34:19.127 } 00:34:19.127 03:42:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:19.127 03:42:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:19.127 03:42:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:19.127 03:42:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:19.127 03:42:39 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:19.127 03:42:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:19.127 03:42:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:19.127 03:42:39 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:34:19.127 03:42:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:19.127 03:42:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:19.386 03:42:39 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:19.386 03:42:39 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:19.386 03:42:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:19.386 03:42:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:19.386 03:42:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:19.386 03:42:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:19.386 03:42:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:19.645 03:42:39 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:19.645 03:42:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:19.645 03:42:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:19.645 03:42:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:19.645 03:42:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:19.904 03:42:39 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:19.904 03:42:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:19.904 03:42:39 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:20.163 03:42:40 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:34:20.163 03:42:40 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.llHdKUGKvL 00:34:20.163 03:42:40 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.llHdKUGKvL 00:34:20.163 03:42:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:20.163 03:42:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.llHdKUGKvL 00:34:20.163 03:42:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:20.163 03:42:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:20.163 03:42:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:20.163 03:42:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:20.163 03:42:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.llHdKUGKvL 00:34:20.163 03:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.llHdKUGKvL 00:34:20.421 [2024-12-06 03:42:40.334680] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.llHdKUGKvL': 0100660 00:34:20.421 [2024-12-06 03:42:40.334706] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:20.421 request: 00:34:20.421 { 00:34:20.421 "name": "key0", 00:34:20.421 "path": "/tmp/tmp.llHdKUGKvL", 00:34:20.421 "method": "keyring_file_add_key", 00:34:20.421 "req_id": 1 00:34:20.421 } 00:34:20.421 Got JSON-RPC error response 00:34:20.421 response: 00:34:20.421 { 00:34:20.421 "code": -1, 00:34:20.421 "message": "Operation not permitted" 00:34:20.421 } 00:34:20.421 03:42:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:20.421 03:42:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:20.421 03:42:40 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:20.421 03:42:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:20.421 03:42:40 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.llHdKUGKvL 00:34:20.421 03:42:40 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.llHdKUGKvL 00:34:20.421 03:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.llHdKUGKvL 00:34:20.422 03:42:40 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.llHdKUGKvL 00:34:20.422 03:42:40 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:20.422 03:42:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:20.422 03:42:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:20.422 03:42:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:20.422 03:42:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:20.422 03:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:20.681 03:42:40 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:20.681 03:42:40 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:20.681 03:42:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:20.681 03:42:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:20.681 03:42:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:20.681 03:42:40 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:20.681 03:42:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:20.681 03:42:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:20.681 03:42:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:20.681 03:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:20.939 [2024-12-06 03:42:40.916230] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.llHdKUGKvL': No such file or directory 00:34:20.939 [2024-12-06 03:42:40.916257] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:20.939 [2024-12-06 03:42:40.916273] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:20.939 [2024-12-06 03:42:40.916280] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:20.939 [2024-12-06 03:42:40.916287] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:20.939 [2024-12-06 03:42:40.916294] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:20.939 request: 00:34:20.939 { 00:34:20.939 "name": "nvme0", 00:34:20.939 "trtype": "tcp", 00:34:20.939 "traddr": "127.0.0.1", 00:34:20.939 "adrfam": "ipv4", 00:34:20.939 "trsvcid": "4420", 00:34:20.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:20.939 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:34:20.939 "prchk_reftag": false, 00:34:20.939 "prchk_guard": false, 00:34:20.939 "hdgst": false, 00:34:20.939 "ddgst": false, 00:34:20.939 "psk": "key0", 00:34:20.939 "allow_unrecognized_csi": false, 00:34:20.939 "method": "bdev_nvme_attach_controller", 00:34:20.939 "req_id": 1 00:34:20.939 } 00:34:20.939 Got JSON-RPC error response 00:34:20.939 response: 00:34:20.939 { 00:34:20.939 "code": -19, 00:34:20.939 "message": "No such device" 00:34:20.939 } 00:34:20.939 03:42:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:20.939 03:42:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:20.939 03:42:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:20.939 03:42:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:20.939 03:42:40 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:20.939 03:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:21.197 03:42:41 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UH3TucCiEe 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:21.197 03:42:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:21.197 03:42:41 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:34:21.197 03:42:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:21.197 03:42:41 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:21.197 03:42:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:21.197 03:42:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UH3TucCiEe 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UH3TucCiEe 00:34:21.197 03:42:41 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.UH3TucCiEe 00:34:21.197 03:42:41 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UH3TucCiEe 00:34:21.197 03:42:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UH3TucCiEe 00:34:21.455 03:42:41 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:21.455 03:42:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:21.714 nvme0n1 00:34:21.714 03:42:41 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:21.714 03:42:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:21.714 03:42:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:21.714 03:42:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:21.714 03:42:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:21.714 03:42:41 keyring_file -- keyring/common.sh@8 -- # 
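The `prep_key`/`format_interchange_psk` trace above turns the raw hex key `00112233445566778899aabbccddeeff` into an NVMe TLS interchange-format PSK via an inline `python -` snippet before `keyring_file_add_key` registers it. A minimal sketch of that encoding, assuming the TP 8006 layout (base64 of the PSK bytes followed by their little-endian CRC32); the exact hash-indicator field rendering is an assumption, not copied from SPDK's `nvmf/common.sh` helper:

```python
import base64
import zlib


def format_interchange_psk(hex_key: str, digest: int) -> str:
    # Assumed layout: NVMeTLSkey-1:<hash-indicator>:<base64(PSK || CRC32-LE(PSK))>:
    key = bytes.fromhex(hex_key)
    crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 of the PSK, appended little-endian
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"


print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The CRC lets a consumer detect a corrupted key without attempting a TLS handshake: decode the base64 blob, split off the last four bytes, and recompute `zlib.crc32` over the rest.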
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:21.714 03:42:41 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:21.714 03:42:41 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:21.714 03:42:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:21.974 03:42:42 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:21.974 03:42:42 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:21.974 03:42:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:21.974 03:42:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:21.974 03:42:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:22.233 03:42:42 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:22.233 03:42:42 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:22.233 03:42:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:22.233 03:42:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:22.233 03:42:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:22.233 03:42:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:22.233 03:42:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:22.491 03:42:42 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:22.492 03:42:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:22.492 03:42:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:34:22.492 03:42:42 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:22.492 03:42:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:22.492 03:42:42 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:22.750 03:42:42 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:22.750 03:42:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UH3TucCiEe 00:34:22.750 03:42:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UH3TucCiEe 00:34:23.009 03:42:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AMG9gSutCt 00:34:23.009 03:42:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AMG9gSutCt 00:34:23.268 03:42:43 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:23.268 03:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:23.526 nvme0n1 00:34:23.526 03:42:43 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:23.526 03:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:23.787 03:42:43 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:23.787 "subsystems": [ 00:34:23.787 { 00:34:23.787 "subsystem": 
"keyring", 00:34:23.787 "config": [ 00:34:23.787 { 00:34:23.787 "method": "keyring_file_add_key", 00:34:23.787 "params": { 00:34:23.787 "name": "key0", 00:34:23.787 "path": "/tmp/tmp.UH3TucCiEe" 00:34:23.787 } 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "method": "keyring_file_add_key", 00:34:23.787 "params": { 00:34:23.787 "name": "key1", 00:34:23.787 "path": "/tmp/tmp.AMG9gSutCt" 00:34:23.787 } 00:34:23.787 } 00:34:23.787 ] 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "subsystem": "iobuf", 00:34:23.787 "config": [ 00:34:23.787 { 00:34:23.787 "method": "iobuf_set_options", 00:34:23.787 "params": { 00:34:23.787 "small_pool_count": 8192, 00:34:23.787 "large_pool_count": 1024, 00:34:23.787 "small_bufsize": 8192, 00:34:23.787 "large_bufsize": 135168, 00:34:23.787 "enable_numa": false 00:34:23.787 } 00:34:23.787 } 00:34:23.787 ] 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "subsystem": "sock", 00:34:23.787 "config": [ 00:34:23.787 { 00:34:23.787 "method": "sock_set_default_impl", 00:34:23.787 "params": { 00:34:23.787 "impl_name": "posix" 00:34:23.787 } 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "method": "sock_impl_set_options", 00:34:23.787 "params": { 00:34:23.787 "impl_name": "ssl", 00:34:23.787 "recv_buf_size": 4096, 00:34:23.787 "send_buf_size": 4096, 00:34:23.787 "enable_recv_pipe": true, 00:34:23.787 "enable_quickack": false, 00:34:23.787 "enable_placement_id": 0, 00:34:23.787 "enable_zerocopy_send_server": true, 00:34:23.787 "enable_zerocopy_send_client": false, 00:34:23.787 "zerocopy_threshold": 0, 00:34:23.787 "tls_version": 0, 00:34:23.787 "enable_ktls": false 00:34:23.787 } 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "method": "sock_impl_set_options", 00:34:23.787 "params": { 00:34:23.787 "impl_name": "posix", 00:34:23.787 "recv_buf_size": 2097152, 00:34:23.787 "send_buf_size": 2097152, 00:34:23.787 "enable_recv_pipe": true, 00:34:23.787 "enable_quickack": false, 00:34:23.787 "enable_placement_id": 0, 00:34:23.787 "enable_zerocopy_send_server": true, 
00:34:23.787 "enable_zerocopy_send_client": false, 00:34:23.787 "zerocopy_threshold": 0, 00:34:23.787 "tls_version": 0, 00:34:23.787 "enable_ktls": false 00:34:23.787 } 00:34:23.787 } 00:34:23.787 ] 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "subsystem": "vmd", 00:34:23.787 "config": [] 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "subsystem": "accel", 00:34:23.787 "config": [ 00:34:23.787 { 00:34:23.787 "method": "accel_set_options", 00:34:23.787 "params": { 00:34:23.787 "small_cache_size": 128, 00:34:23.787 "large_cache_size": 16, 00:34:23.787 "task_count": 2048, 00:34:23.787 "sequence_count": 2048, 00:34:23.787 "buf_count": 2048 00:34:23.787 } 00:34:23.787 } 00:34:23.787 ] 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "subsystem": "bdev", 00:34:23.787 "config": [ 00:34:23.787 { 00:34:23.787 "method": "bdev_set_options", 00:34:23.787 "params": { 00:34:23.787 "bdev_io_pool_size": 65535, 00:34:23.787 "bdev_io_cache_size": 256, 00:34:23.787 "bdev_auto_examine": true, 00:34:23.787 "iobuf_small_cache_size": 128, 00:34:23.787 "iobuf_large_cache_size": 16 00:34:23.787 } 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "method": "bdev_raid_set_options", 00:34:23.787 "params": { 00:34:23.787 "process_window_size_kb": 1024, 00:34:23.787 "process_max_bandwidth_mb_sec": 0 00:34:23.787 } 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "method": "bdev_iscsi_set_options", 00:34:23.787 "params": { 00:34:23.787 "timeout_sec": 30 00:34:23.787 } 00:34:23.787 }, 00:34:23.787 { 00:34:23.787 "method": "bdev_nvme_set_options", 00:34:23.787 "params": { 00:34:23.787 "action_on_timeout": "none", 00:34:23.787 "timeout_us": 0, 00:34:23.787 "timeout_admin_us": 0, 00:34:23.787 "keep_alive_timeout_ms": 10000, 00:34:23.787 "arbitration_burst": 0, 00:34:23.787 "low_priority_weight": 0, 00:34:23.787 "medium_priority_weight": 0, 00:34:23.787 "high_priority_weight": 0, 00:34:23.787 "nvme_adminq_poll_period_us": 10000, 00:34:23.787 "nvme_ioq_poll_period_us": 0, 00:34:23.787 "io_queue_requests": 512, 
00:34:23.787 "delay_cmd_submit": true, 00:34:23.787 "transport_retry_count": 4, 00:34:23.787 "bdev_retry_count": 3, 00:34:23.787 "transport_ack_timeout": 0, 00:34:23.787 "ctrlr_loss_timeout_sec": 0, 00:34:23.787 "reconnect_delay_sec": 0, 00:34:23.787 "fast_io_fail_timeout_sec": 0, 00:34:23.787 "disable_auto_failback": false, 00:34:23.787 "generate_uuids": false, 00:34:23.787 "transport_tos": 0, 00:34:23.787 "nvme_error_stat": false, 00:34:23.787 "rdma_srq_size": 0, 00:34:23.787 "io_path_stat": false, 00:34:23.788 "allow_accel_sequence": false, 00:34:23.788 "rdma_max_cq_size": 0, 00:34:23.788 "rdma_cm_event_timeout_ms": 0, 00:34:23.788 "dhchap_digests": [ 00:34:23.788 "sha256", 00:34:23.788 "sha384", 00:34:23.788 "sha512" 00:34:23.788 ], 00:34:23.788 "dhchap_dhgroups": [ 00:34:23.788 "null", 00:34:23.788 "ffdhe2048", 00:34:23.788 "ffdhe3072", 00:34:23.788 "ffdhe4096", 00:34:23.788 "ffdhe6144", 00:34:23.788 "ffdhe8192" 00:34:23.788 ] 00:34:23.788 } 00:34:23.788 }, 00:34:23.788 { 00:34:23.788 "method": "bdev_nvme_attach_controller", 00:34:23.788 "params": { 00:34:23.788 "name": "nvme0", 00:34:23.788 "trtype": "TCP", 00:34:23.788 "adrfam": "IPv4", 00:34:23.788 "traddr": "127.0.0.1", 00:34:23.788 "trsvcid": "4420", 00:34:23.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:23.788 "prchk_reftag": false, 00:34:23.788 "prchk_guard": false, 00:34:23.788 "ctrlr_loss_timeout_sec": 0, 00:34:23.788 "reconnect_delay_sec": 0, 00:34:23.788 "fast_io_fail_timeout_sec": 0, 00:34:23.788 "psk": "key0", 00:34:23.788 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:23.788 "hdgst": false, 00:34:23.788 "ddgst": false, 00:34:23.788 "multipath": "multipath" 00:34:23.788 } 00:34:23.788 }, 00:34:23.788 { 00:34:23.788 "method": "bdev_nvme_set_hotplug", 00:34:23.788 "params": { 00:34:23.788 "period_us": 100000, 00:34:23.788 "enable": false 00:34:23.788 } 00:34:23.788 }, 00:34:23.788 { 00:34:23.788 "method": "bdev_wait_for_examine" 00:34:23.788 } 00:34:23.788 ] 00:34:23.788 }, 00:34:23.788 { 
00:34:23.788 "subsystem": "nbd", 00:34:23.788 "config": [] 00:34:23.788 } 00:34:23.788 ] 00:34:23.788 }' 00:34:23.788 03:42:43 keyring_file -- keyring/file.sh@115 -- # killprocess 2891193 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2891193 ']' 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2891193 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2891193 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2891193' 00:34:23.788 killing process with pid 2891193 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@973 -- # kill 2891193 00:34:23.788 Received shutdown signal, test time was about 1.000000 seconds 00:34:23.788 00:34:23.788 Latency(us) 00:34:23.788 [2024-12-06T02:42:43.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:23.788 [2024-12-06T02:42:43.929Z] =================================================================================================================== 00:34:23.788 [2024-12-06T02:42:43.929Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@978 -- # wait 2891193 00:34:23.788 03:42:43 keyring_file -- keyring/file.sh@118 -- # bperfpid=2892706 00:34:23.788 03:42:43 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2892706 /var/tmp/bperf.sock 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2892706 ']' 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:34:23.788 03:42:43 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.788 03:42:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:23.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:23.788 03:42:43 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:23.788 "subsystems": [ 00:34:23.788 { 00:34:23.788 "subsystem": "keyring", 00:34:23.788 "config": [ 00:34:23.788 { 00:34:23.788 "method": "keyring_file_add_key", 00:34:23.788 "params": { 00:34:23.788 "name": "key0", 00:34:23.788 "path": "/tmp/tmp.UH3TucCiEe" 00:34:23.788 } 00:34:23.788 }, 00:34:23.788 { 00:34:23.788 "method": "keyring_file_add_key", 00:34:23.788 "params": { 00:34:23.788 "name": "key1", 00:34:23.788 "path": "/tmp/tmp.AMG9gSutCt" 00:34:23.788 } 00:34:23.788 } 00:34:23.788 ] 00:34:23.788 }, 00:34:23.788 { 00:34:23.788 "subsystem": "iobuf", 00:34:23.788 "config": [ 00:34:23.788 { 00:34:23.788 "method": "iobuf_set_options", 00:34:23.788 "params": { 00:34:23.788 "small_pool_count": 8192, 00:34:23.788 "large_pool_count": 1024, 00:34:23.788 "small_bufsize": 8192, 00:34:23.788 "large_bufsize": 135168, 00:34:23.788 "enable_numa": false 00:34:23.788 } 00:34:23.788 } 00:34:23.788 ] 00:34:23.788 }, 00:34:23.788 { 00:34:23.788 "subsystem": "sock", 00:34:23.788 "config": [ 00:34:23.788 { 00:34:23.788 "method": "sock_set_default_impl", 00:34:23.788 "params": { 00:34:23.788 "impl_name": "posix" 00:34:23.788 } 00:34:23.788 }, 00:34:23.788 { 00:34:23.788 "method": "sock_impl_set_options", 00:34:23.788 "params": { 00:34:23.788 "impl_name": "ssl", 00:34:23.788 "recv_buf_size": 4096, 00:34:23.788 
"send_buf_size": 4096, 00:34:23.788 "enable_recv_pipe": true, 00:34:23.788 "enable_quickack": false, 00:34:23.788 "enable_placement_id": 0, 00:34:23.788 "enable_zerocopy_send_server": true, 00:34:23.788 "enable_zerocopy_send_client": false, 00:34:23.788 "zerocopy_threshold": 0, 00:34:23.788 "tls_version": 0, 00:34:23.788 "enable_ktls": false 00:34:23.788 } 00:34:23.788 }, 00:34:23.788 { 00:34:23.788 "method": "sock_impl_set_options", 00:34:23.788 "params": { 00:34:23.788 "impl_name": "posix", 00:34:23.788 "recv_buf_size": 2097152, 00:34:23.788 "send_buf_size": 2097152, 00:34:23.788 "enable_recv_pipe": true, 00:34:23.788 "enable_quickack": false, 00:34:23.788 "enable_placement_id": 0, 00:34:23.788 "enable_zerocopy_send_server": true, 00:34:23.788 "enable_zerocopy_send_client": false, 00:34:23.788 "zerocopy_threshold": 0, 00:34:23.788 "tls_version": 0, 00:34:23.789 "enable_ktls": false 00:34:23.789 } 00:34:23.789 } 00:34:23.789 ] 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "subsystem": "vmd", 00:34:23.789 "config": [] 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "subsystem": "accel", 00:34:23.789 "config": [ 00:34:23.789 { 00:34:23.789 "method": "accel_set_options", 00:34:23.789 "params": { 00:34:23.789 "small_cache_size": 128, 00:34:23.789 "large_cache_size": 16, 00:34:23.789 "task_count": 2048, 00:34:23.789 "sequence_count": 2048, 00:34:23.789 "buf_count": 2048 00:34:23.789 } 00:34:23.789 } 00:34:23.789 ] 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "subsystem": "bdev", 00:34:23.789 "config": [ 00:34:23.789 { 00:34:23.789 "method": "bdev_set_options", 00:34:23.789 "params": { 00:34:23.789 "bdev_io_pool_size": 65535, 00:34:23.789 "bdev_io_cache_size": 256, 00:34:23.789 "bdev_auto_examine": true, 00:34:23.789 "iobuf_small_cache_size": 128, 00:34:23.789 "iobuf_large_cache_size": 16 00:34:23.789 } 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "method": "bdev_raid_set_options", 00:34:23.789 "params": { 00:34:23.789 "process_window_size_kb": 1024, 00:34:23.789 
"process_max_bandwidth_mb_sec": 0 00:34:23.789 } 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "method": "bdev_iscsi_set_options", 00:34:23.789 "params": { 00:34:23.789 "timeout_sec": 30 00:34:23.789 } 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "method": "bdev_nvme_set_options", 00:34:23.789 "params": { 00:34:23.789 "action_on_timeout": "none", 00:34:23.789 "timeout_us": 0, 00:34:23.789 "timeout_admin_us": 0, 00:34:23.789 "keep_alive_timeout_ms": 10000, 00:34:23.789 "arbitration_burst": 0, 00:34:23.789 "low_priority_weight": 0, 00:34:23.789 "medium_priority_weight": 0, 00:34:23.789 "high_priority_weight": 0, 00:34:23.789 "nvme_adminq_poll_period_us": 10000, 00:34:23.789 "nvme_ioq_poll_period_us": 0, 00:34:23.789 "io_queue_requests": 512, 00:34:23.789 "delay_cmd_submit": true, 00:34:23.789 "transport_retry_count": 4, 00:34:23.789 "bdev_retry_count": 3, 00:34:23.789 "transport_ack_timeout": 0, 00:34:23.789 "ctrlr_loss_timeout_sec": 0, 00:34:23.789 "reconnect_delay_sec": 0, 00:34:23.789 "fast_io_fail_timeout_sec": 0, 00:34:23.789 "disable_auto_failback": false, 00:34:23.789 "generate_uuids": false, 00:34:23.789 "transport_tos": 0, 00:34:23.789 "nvme_error_stat": false, 00:34:23.789 "rdma_srq_size": 0, 00:34:23.789 "io_path_stat": false, 00:34:23.789 "allow_accel_sequence": false, 00:34:23.789 "rdma_max_cq_size": 0, 00:34:23.789 "rdma_cm_event_timeout_ms": 0, 00:34:23.789 "dhchap_digests": [ 00:34:23.789 "sha256", 00:34:23.789 "sha384", 00:34:23.789 "sha512" 00:34:23.789 ], 00:34:23.789 "dhchap_dhgroups": [ 00:34:23.789 "null", 00:34:23.789 "ffdhe2048", 00:34:23.789 "ffdhe3072", 00:34:23.789 "ffdhe4096", 00:34:23.789 "ffdhe6144", 00:34:23.789 "ffdhe8192" 00:34:23.789 ] 00:34:23.789 } 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "method": "bdev_nvme_attach_controller", 00:34:23.789 "params": { 00:34:23.789 "name": "nvme0", 00:34:23.789 "trtype": "TCP", 00:34:23.789 "adrfam": "IPv4", 00:34:23.789 "traddr": "127.0.0.1", 00:34:23.789 "trsvcid": "4420", 00:34:23.789 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:34:23.789 "prchk_reftag": false, 00:34:23.789 "prchk_guard": false, 00:34:23.789 "ctrlr_loss_timeout_sec": 0, 00:34:23.789 "reconnect_delay_sec": 0, 00:34:23.789 "fast_io_fail_timeout_sec": 0, 00:34:23.789 "psk": "key0", 00:34:23.789 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:23.789 "hdgst": false, 00:34:23.789 "ddgst": false, 00:34:23.789 "multipath": "multipath" 00:34:23.789 } 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "method": "bdev_nvme_set_hotplug", 00:34:23.789 "params": { 00:34:23.789 "period_us": 100000, 00:34:23.789 "enable": false 00:34:23.789 } 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "method": "bdev_wait_for_examine" 00:34:23.789 } 00:34:23.789 ] 00:34:23.789 }, 00:34:23.789 { 00:34:23.789 "subsystem": "nbd", 00:34:23.789 "config": [] 00:34:23.789 } 00:34:23.789 ] 00:34:23.789 }' 00:34:23.789 03:42:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.789 03:42:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:24.049 [2024-12-06 03:42:43.936670] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:34:24.049 [2024-12-06 03:42:43.936721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892706 ] 00:34:24.049 [2024-12-06 03:42:43.998393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.049 [2024-12-06 03:42:44.036308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.308 [2024-12-06 03:42:44.197873] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:24.876 03:42:44 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.876 03:42:44 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:24.876 03:42:44 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:24.876 03:42:44 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:24.876 03:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:24.876 03:42:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:24.876 03:42:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:24.876 03:42:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:24.876 03:42:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.876 03:42:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.876 03:42:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:24.876 03:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:25.149 03:42:45 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:25.149 03:42:45 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:25.149 03:42:45 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:25.149 03:42:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:25.149 03:42:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:25.149 03:42:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:25.149 03:42:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:25.411 03:42:45 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:25.411 03:42:45 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:25.411 03:42:45 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:25.411 03:42:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:25.670 03:42:45 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:25.670 03:42:45 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:25.670 03:42:45 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UH3TucCiEe /tmp/tmp.AMG9gSutCt 00:34:25.670 03:42:45 keyring_file -- keyring/file.sh@20 -- # killprocess 2892706 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2892706 ']' 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2892706 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892706 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2892706' 00:34:25.670 killing process with pid 2892706 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@973 -- # kill 2892706 00:34:25.670 Received shutdown signal, test time was about 1.000000 seconds 00:34:25.670 00:34:25.670 Latency(us) 00:34:25.670 [2024-12-06T02:42:45.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.670 [2024-12-06T02:42:45.811Z] =================================================================================================================== 00:34:25.670 [2024-12-06T02:42:45.811Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@978 -- # wait 2892706 00:34:25.670 03:42:45 keyring_file -- keyring/file.sh@21 -- # killprocess 2891188 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2891188 ']' 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2891188 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.670 03:42:45 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2891188 00:34:25.930 03:42:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:25.930 03:42:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:25.930 03:42:45 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2891188' 00:34:25.930 killing process with pid 2891188 00:34:25.930 03:42:45 keyring_file -- common/autotest_common.sh@973 -- # kill 2891188 00:34:25.930 03:42:45 keyring_file -- common/autotest_common.sh@978 -- # wait 2891188 00:34:26.188 00:34:26.188 real 0m11.635s 00:34:26.188 user 0m28.771s 00:34:26.188 sys 0m2.727s 00:34:26.188 03:42:46 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:34:26.188 03:42:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:26.188 ************************************ 00:34:26.188 END TEST keyring_file 00:34:26.188 ************************************ 00:34:26.188 03:42:46 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:34:26.188 03:42:46 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:26.188 03:42:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:26.188 03:42:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.188 03:42:46 -- common/autotest_common.sh@10 -- # set +x 00:34:26.188 ************************************ 00:34:26.188 START TEST keyring_linux 00:34:26.188 ************************************ 00:34:26.189 03:42:46 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:26.189 Joined session keyring: 836711445 00:34:26.189 * Looking for test storage... 
00:34:26.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:26.189 03:42:46 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:26.189 03:42:46 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:34:26.189 03:42:46 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:26.448 03:42:46 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:26.448 03:42:46 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:26.448 03:42:46 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:26.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.448 --rc genhtml_branch_coverage=1 00:34:26.448 --rc genhtml_function_coverage=1 00:34:26.448 --rc genhtml_legend=1 00:34:26.448 --rc geninfo_all_blocks=1 00:34:26.448 --rc geninfo_unexecuted_blocks=1 00:34:26.448 00:34:26.448 ' 00:34:26.448 03:42:46 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:26.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.448 --rc genhtml_branch_coverage=1 00:34:26.448 --rc genhtml_function_coverage=1 00:34:26.448 --rc genhtml_legend=1 00:34:26.448 --rc geninfo_all_blocks=1 00:34:26.448 --rc geninfo_unexecuted_blocks=1 00:34:26.448 00:34:26.448 ' 
00:34:26.448 03:42:46 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:26.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.448 --rc genhtml_branch_coverage=1 00:34:26.448 --rc genhtml_function_coverage=1 00:34:26.448 --rc genhtml_legend=1 00:34:26.448 --rc geninfo_all_blocks=1 00:34:26.448 --rc geninfo_unexecuted_blocks=1 00:34:26.448 00:34:26.448 ' 00:34:26.448 03:42:46 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:26.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.448 --rc genhtml_branch_coverage=1 00:34:26.448 --rc genhtml_function_coverage=1 00:34:26.448 --rc genhtml_legend=1 00:34:26.448 --rc geninfo_all_blocks=1 00:34:26.448 --rc geninfo_unexecuted_blocks=1 00:34:26.448 00:34:26.448 ' 00:34:26.448 03:42:46 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:26.448 03:42:46 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.448 03:42:46 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.448 03:42:46 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.448 03:42:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.448 03:42:46 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.448 03:42:46 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.448 03:42:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:26.449 03:42:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:26.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:26.449 /tmp/:spdk-test:key0 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:26.449 03:42:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:26.449 03:42:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:26.449 /tmp/:spdk-test:key1 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2893264 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:26.449 03:42:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2893264 00:34:26.449 03:42:46 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2893264 ']' 00:34:26.449 03:42:46 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.449 03:42:46 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.449 03:42:46 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.449 03:42:46 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.449 03:42:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:26.449 [2024-12-06 03:42:46.524338] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:34:26.449 [2024-12-06 03:42:46.524387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893264 ] 00:34:26.449 [2024-12-06 03:42:46.582420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.707 [2024-12-06 03:42:46.625275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.708 03:42:46 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.708 03:42:46 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:26.708 03:42:46 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:26.708 03:42:46 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.708 03:42:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:26.708 [2024-12-06 03:42:46.836087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.966 null0 00:34:26.967 [2024-12-06 03:42:46.868131] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:26.967 [2024-12-06 03:42:46.868451] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:26.967 03:42:46 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.967 03:42:46 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:26.967 88628351 00:34:26.967 03:42:46 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:26.967 846100851 00:34:26.967 03:42:46 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2893269 00:34:26.967 03:42:46 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2893269 /var/tmp/bperf.sock 00:34:26.967 03:42:46 keyring_linux -- 
common/autotest_common.sh@835 -- # '[' -z 2893269 ']' 00:34:26.967 03:42:46 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:26.967 03:42:46 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.967 03:42:46 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:26.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:26.967 03:42:46 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.967 03:42:46 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:26.967 03:42:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:26.967 [2024-12-06 03:42:46.938248] Starting SPDK v25.01-pre git sha1 05632f11a / DPDK 24.03.0 initialization... 
00:34:26.967 [2024-12-06 03:42:46.938290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893269 ] 00:34:26.967 [2024-12-06 03:42:46.999731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.967 [2024-12-06 03:42:47.042452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.967 03:42:47 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.967 03:42:47 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:26.967 03:42:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:26.967 03:42:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:27.225 03:42:47 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:27.225 03:42:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:27.483 03:42:47 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:27.483 03:42:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:27.742 [2024-12-06 03:42:47.712516] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:27.742 nvme0n1 00:34:27.742 03:42:47 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:34:27.742 03:42:47 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:27.742 03:42:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:27.742 03:42:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:27.742 03:42:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:27.742 03:42:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:28.000 03:42:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:28.000 03:42:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:28.000 03:42:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:28.000 03:42:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:28.000 03:42:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:28.000 03:42:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:28.000 03:42:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:28.259 03:42:48 keyring_linux -- keyring/linux.sh@25 -- # sn=88628351 00:34:28.259 03:42:48 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:28.259 03:42:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:28.259 03:42:48 keyring_linux -- keyring/linux.sh@26 -- # [[ 88628351 == \8\8\6\2\8\3\5\1 ]] 00:34:28.259 03:42:48 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 88628351 00:34:28.259 03:42:48 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:28.259 03:42:48 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:28.259 Running I/O for 1 seconds... 00:34:29.195 19110.00 IOPS, 74.65 MiB/s 00:34:29.195 Latency(us) 00:34:29.195 [2024-12-06T02:42:49.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:29.195 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:29.195 nvme0n1 : 1.01 19109.03 74.64 0.00 0.00 6673.26 5356.86 13734.07 00:34:29.195 [2024-12-06T02:42:49.336Z] =================================================================================================================== 00:34:29.195 [2024-12-06T02:42:49.336Z] Total : 19109.03 74.64 0.00 0.00 6673.26 5356.86 13734.07 00:34:29.195 { 00:34:29.195 "results": [ 00:34:29.195 { 00:34:29.195 "job": "nvme0n1", 00:34:29.195 "core_mask": "0x2", 00:34:29.195 "workload": "randread", 00:34:29.195 "status": "finished", 00:34:29.195 "queue_depth": 128, 00:34:29.195 "io_size": 4096, 00:34:29.195 "runtime": 1.006854, 00:34:29.195 "iops": 19109.026730787184, 00:34:29.195 "mibps": 74.64463566713744, 00:34:29.195 "io_failed": 0, 00:34:29.195 "io_timeout": 0, 00:34:29.195 "avg_latency_us": 6673.2588845701885, 00:34:29.195 "min_latency_us": 5356.855652173913, 00:34:29.195 "max_latency_us": 13734.066086956522 00:34:29.195 } 00:34:29.195 ], 00:34:29.195 "core_count": 1 00:34:29.195 } 00:34:29.195 03:42:49 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:29.195 03:42:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:29.455 03:42:49 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:29.455 03:42:49 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:29.455 03:42:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:29.455 03:42:49 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:29.455 03:42:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:29.455 03:42:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:29.714 03:42:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:29.714 03:42:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:29.714 03:42:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:29.714 03:42:49 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:29.714 03:42:49 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:34:29.714 03:42:49 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:29.714 03:42:49 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:29.714 03:42:49 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:29.714 03:42:49 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:29.714 03:42:49 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:29.714 03:42:49 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:29.714 03:42:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:29.973 [2024-12-06 03:42:49.886844] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:29.973 [2024-12-06 03:42:49.887128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e7bc0 (107): Transport endpoint is not connected 00:34:29.973 [2024-12-06 03:42:49.888123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e7bc0 (9): Bad file descriptor 00:34:29.973 [2024-12-06 03:42:49.889124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:29.973 [2024-12-06 03:42:49.889142] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:29.973 [2024-12-06 03:42:49.889149] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:29.973 [2024-12-06 03:42:49.889157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:34:29.973 request: 00:34:29.973 { 00:34:29.973 "name": "nvme0", 00:34:29.973 "trtype": "tcp", 00:34:29.973 "traddr": "127.0.0.1", 00:34:29.973 "adrfam": "ipv4", 00:34:29.973 "trsvcid": "4420", 00:34:29.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.973 "prchk_reftag": false, 00:34:29.973 "prchk_guard": false, 00:34:29.973 "hdgst": false, 00:34:29.973 "ddgst": false, 00:34:29.973 "psk": ":spdk-test:key1", 00:34:29.973 "allow_unrecognized_csi": false, 00:34:29.973 "method": "bdev_nvme_attach_controller", 00:34:29.973 "req_id": 1 00:34:29.973 } 00:34:29.973 Got JSON-RPC error response 00:34:29.973 response: 00:34:29.973 { 00:34:29.973 "code": -5, 00:34:29.973 "message": "Input/output error" 00:34:29.973 } 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@33 -- # sn=88628351 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 88628351 00:34:29.973 1 links removed 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:29.973 
03:42:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@33 -- # sn=846100851 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 846100851 00:34:29.973 1 links removed 00:34:29.973 03:42:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2893269 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2893269 ']' 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2893269 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2893269 00:34:29.973 03:42:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:29.974 03:42:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:29.974 03:42:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2893269' 00:34:29.974 killing process with pid 2893269 00:34:29.974 03:42:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 2893269 00:34:29.974 Received shutdown signal, test time was about 1.000000 seconds 00:34:29.974 00:34:29.974 Latency(us) 00:34:29.974 [2024-12-06T02:42:50.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:29.974 [2024-12-06T02:42:50.115Z] =================================================================================================================== 00:34:29.974 [2024-12-06T02:42:50.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:29.974 03:42:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 2893269 
00:34:30.233 03:42:50 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2893264 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2893264 ']' 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2893264 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2893264 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2893264' 00:34:30.233 killing process with pid 2893264 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@973 -- # kill 2893264 00:34:30.233 03:42:50 keyring_linux -- common/autotest_common.sh@978 -- # wait 2893264 00:34:30.493 00:34:30.493 real 0m4.288s 00:34:30.493 user 0m7.998s 00:34:30.493 sys 0m1.442s 00:34:30.493 03:42:50 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.493 03:42:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:30.493 ************************************ 00:34:30.493 END TEST keyring_linux 00:34:30.493 ************************************ 00:34:30.493 03:42:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:30.493 03:42:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:30.493 03:42:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:30.493 03:42:50 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:30.493 03:42:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:30.493 03:42:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:30.493 03:42:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:30.493 03:42:50 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']'
00:34:30.493 03:42:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:30.493 03:42:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:30.493 03:42:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:30.493 03:42:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:30.493 03:42:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:30.493 03:42:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:30.493 03:42:50 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:30.493 03:42:50 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:30.493 03:42:50 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:30.493 03:42:50 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:30.493 03:42:50 -- common/autotest_common.sh@10 -- # set +x
00:34:30.493 03:42:50 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:30.493 03:42:50 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:30.493 03:42:50 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:30.493 03:42:50 -- common/autotest_common.sh@10 -- # set +x
00:34:35.763 INFO: APP EXITING
00:34:35.763 INFO: killing all VMs
00:34:35.763 INFO: killing vhost app
00:34:35.763 INFO: EXIT DONE
00:34:37.665 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:34:37.665 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:34:37.665 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:34:37.665 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:34:37.665 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:34:37.665 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:34:37.665 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:34:37.666 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:34:40.220 Cleaning
00:34:40.220 Removing: /var/run/dpdk/spdk0/config
00:34:40.220 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:40.220 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:40.479 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:40.479 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:40.479 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:34:40.479 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:34:40.479 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:34:40.479 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:34:40.479 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:40.479 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:40.479 Removing: /var/run/dpdk/spdk1/config
00:34:40.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:34:40.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:34:40.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:34:40.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:34:40.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:34:40.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:34:40.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:34:40.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:34:40.479 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:34:40.479 Removing: /var/run/dpdk/spdk1/hugepage_info
00:34:40.479 Removing: /var/run/dpdk/spdk2/config
00:34:40.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:34:40.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:34:40.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:34:40.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:34:40.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:34:40.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:34:40.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:34:40.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:34:40.479 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:34:40.479 Removing: /var/run/dpdk/spdk2/hugepage_info
00:34:40.479 Removing: /var/run/dpdk/spdk3/config
00:34:40.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:34:40.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:34:40.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:34:40.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:34:40.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:34:40.480 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:34:40.480 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:34:40.480 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:34:40.480 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:34:40.480 Removing: /var/run/dpdk/spdk3/hugepage_info
00:34:40.480 Removing: /var/run/dpdk/spdk4/config
00:34:40.480 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:34:40.480 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:34:40.480 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:34:40.480 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:34:40.480 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:34:40.480 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:34:40.480 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:34:40.480 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:34:40.480 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:34:40.480 Removing: /var/run/dpdk/spdk4/hugepage_info
00:34:40.480 Removing: /dev/shm/bdev_svc_trace.1
00:34:40.480 Removing: /dev/shm/nvmf_trace.0
00:34:40.480 Removing: /dev/shm/spdk_tgt_trace.pid2419469
00:34:40.480 Removing: /var/run/dpdk/spdk0
00:34:40.480 Removing: /var/run/dpdk/spdk1
00:34:40.480 Removing: /var/run/dpdk/spdk2
00:34:40.480 Removing: /var/run/dpdk/spdk3
00:34:40.480 Removing: /var/run/dpdk/spdk4
00:34:40.480 Removing: /var/run/dpdk/spdk_pid2417354
00:34:40.738 Removing: /var/run/dpdk/spdk_pid2418390
00:34:40.738 Removing: /var/run/dpdk/spdk_pid2419469
00:34:40.738 Removing: /var/run/dpdk/spdk_pid2420110
00:34:40.738 Removing: /var/run/dpdk/spdk_pid2421177
00:34:40.738 Removing: /var/run/dpdk/spdk_pid2421199
00:34:40.738 Removing: /var/run/dpdk/spdk_pid2422472
00:34:40.738 Removing: /var/run/dpdk/spdk_pid2422702
00:34:40.738 Removing: /var/run/dpdk/spdk_pid2423013
00:34:40.738 Removing: /var/run/dpdk/spdk_pid2424661
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2425940
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2426231
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2426526
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2426828
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2427082
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2427250
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2427417
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2427731
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2428426
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2431425
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2431690
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2431937
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2431947
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2432435
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2432438
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2432954
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2432958
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2433222
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2433248
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2433486
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2433588
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2434061
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2434310
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2434606
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2438305
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2442561
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2452598
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2453197
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2457350
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2457814
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2461868
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2467834
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2470858
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2480969
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2489757
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2491405
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2492346
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2508995
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2513018
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2558543
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2563930
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2570205
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2576595
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2576689
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2577423
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2578317
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2579227
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2579699
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2579878
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2580147
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2580165
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2580167
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2581077
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2581992
00:34:40.739 Removing: /var/run/dpdk/spdk_pid2582907
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2583379
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2583381
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2583682
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2584833
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2585833
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2593925
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2622847
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2627347
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2628951
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2630789
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2631021
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2631041
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2631270
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2631776
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2633573
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2634380
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2634882
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2636984
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2637473
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2637975
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2642201
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2648027
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2648029
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2648031
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2651701
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2659978
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2663851
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2669846
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2671139
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2672456
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2673770
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2678259
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2682600
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2686431
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2693861
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2694029
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2698977
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2699145
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2699257
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2699681
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2699698
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2704160
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2704733
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2709134
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2711822
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2717113
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2722547
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2731295
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2738110
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2738117
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2757412
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2757884
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2758395
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2759046
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2759687
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2760259
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2760732
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2761308
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2765457
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2765695
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2771761
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2771915
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2777278
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2781505
00:34:40.998 Removing: /var/run/dpdk/spdk_pid2791750
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2792246
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2796462
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2796724
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2800829
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2806594
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2809173
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2819121
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2827797
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2829402
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2830320
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2846857
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2850625
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2853442
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2860828
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2860911
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2865777
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2867740
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2869643
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2870751
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2872718
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2873903
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2882549
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2883253
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2883768
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2886426
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2886890
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2887355
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2891188
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2891193
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2892706
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2893264
00:34:41.257 Removing: /var/run/dpdk/spdk_pid2893269
00:34:41.257 Clean
00:34:41.257 03:43:01 -- common/autotest_common.sh@1453 -- # return 0
00:34:41.257 03:43:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:41.257 03:43:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:41.257 03:43:01 -- common/autotest_common.sh@10 -- # set +x
00:34:41.257 03:43:01 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:41.257 03:43:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:41.257 03:43:01 -- common/autotest_common.sh@10 -- # set +x
00:34:41.516 03:43:01 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:41.516 03:43:01 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:34:41.516 03:43:01 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:34:41.516 03:43:01 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:41.516 03:43:01 -- spdk/autotest.sh@398 -- # hostname
00:34:41.516 03:43:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:03.463 geninfo: WARNING: invalid characters removed from testname!
00:35:03.463 03:43:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:05.368 03:43:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:07.900 03:43:27 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:09.279 03:43:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:11.185 03:43:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:13.088 03:43:33 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:14.993 03:43:35 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:14.993 03:43:35 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:14.993 03:43:35 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:35:14.993 03:43:35 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:14.993 03:43:35 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:14.993 03:43:35 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:14.993 + [[ -n 2341105 ]]
00:35:14.993 + sudo kill 2341105
00:35:15.259 [Pipeline] }
00:35:15.272 [Pipeline] // stage
00:35:15.277 [Pipeline] }
00:35:15.292 [Pipeline] // timeout
00:35:15.296 [Pipeline] }
00:35:15.309 [Pipeline] // catchError
00:35:15.313 [Pipeline] }
00:35:15.327 [Pipeline] // wrap
00:35:15.343 [Pipeline] }
00:35:15.355 [Pipeline] // catchError
00:35:15.362 [Pipeline] stage
00:35:15.364 [Pipeline] { (Epilogue)
00:35:15.375 [Pipeline] catchError
00:35:15.377 [Pipeline] {
00:35:15.389 [Pipeline] echo
00:35:15.390 Cleanup processes
00:35:15.396 [Pipeline] sh
00:35:15.685 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:15.685 2903638 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:15.697 [Pipeline] sh
00:35:15.977 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:15.977 ++ grep -v 'sudo pgrep'
00:35:15.977 ++ awk '{print $1}'
00:35:15.977 + sudo kill -9
00:35:15.977 + true
00:35:15.986 [Pipeline] sh
00:35:16.263 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:28.476 [Pipeline] sh
00:35:28.755 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:28.755 Artifacts sizes are good
00:35:28.768 [Pipeline] archiveArtifacts
00:35:28.775 Archiving artifacts
00:35:28.892 [Pipeline] sh
00:35:29.273 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:35:29.287 [Pipeline] cleanWs
00:35:29.296 [WS-CLEANUP] Deleting project workspace...
00:35:29.296 [WS-CLEANUP] Deferred wipeout is used...
00:35:29.302 [WS-CLEANUP] done
00:35:29.304 [Pipeline] }
00:35:29.319 [Pipeline] // catchError
00:35:29.329 [Pipeline] sh
00:35:29.610 + logger -p user.info -t JENKINS-CI
00:35:29.619 [Pipeline] }
00:35:29.632 [Pipeline] // stage
00:35:29.636 [Pipeline] }
00:35:29.649 [Pipeline] // node
00:35:29.654 [Pipeline] End of Pipeline
00:35:29.687 Finished: SUCCESS